Agents often have to construct plans that obey resource limits for continuous resources whose consumption can only be characterized by probability distributions. While Markov Decision Processes (MDPs) with a state space of continuous and discrete variables are popular for modeling these domains, current algorithms for such MDPs can exhibit poor performance with a scale-up in their state space. To remedy that, we propose an algorithm called DPFP. DPFP's key contribution is its exploitation of the dual space of cumulative distribution functions. This dual formulation is key to DPFP's novel combination of three features. First, it enables DPFP's membership in a class of algorithms that perform forward search in a large (possibly infinite) po...
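The abstract above describes planning under a continuous resource limit where consumption is only known as a probability distribution. A minimal sketch of the underlying idea, estimating the cumulative distribution function (CDF) of a fixed policy's total resource use by forward-sampling trajectories, might look like this. All names (`simulate_resource_use`, the Gaussian per-step consumption model, the toy policy) are illustrative assumptions, not the paper's DPFP algorithm:

```python
import bisect
import random

def simulate_resource_use(policy, horizon, n_samples=10000, seed=0):
    """Estimate the CDF of total resource consumption for a fixed policy
    by forward-sampling trajectories. Each step's consumption is drawn
    from a hypothetical continuous distribution (Gaussian here)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_samples):
        used = 0.0
        for t in range(horizon):
            mean, std = policy(t)          # action parameters at step t
            used += max(0.0, rng.gauss(mean, std))  # nonnegative consumption
        totals.append(used)
    totals.sort()

    def cdf(x):
        # empirical P(total consumption <= x)
        return bisect.bisect_right(totals, x) / len(totals)

    return cdf

# Toy policy: every step consumes ~1.0 units (std 0.2).
cdf = simulate_resource_use(lambda t: (1.0, 0.2), horizon=10)
p_within_budget = cdf(12.0)  # probability the plan stays within a budget of 12
```

Reasoning over such CDFs (rather than over point estimates of consumption) is what lets a planner bound the probability that a plan violates its resource limit.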
Graduation date: 2017. Markov Decision Processes (MDPs) are the de facto formalism for studying sequen...
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (M...
Markov decision processes (MDP) offer a rich model that has been extensively used by the AI communit...
My research concentrates on developing reasoning techniques for intelligent, autonomous ...
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, whil...
Although many real-world stochastic planning problems are more naturally formulated by hybrid models...
Markov decision process (MDP), originally studied in the Operations Research (OR) community, provide...
We propose a novel approach for solving continuous and hybrid Markov Decision Processes (MDPs) based...
Optimally solving decentralized partially observable Markov decision processes...
In this paper, we present a new algorithm that integrates recent advances in solving continuous band...
Many real-world domains require that agents plan their future actions despite uncertainty, and that...
Optimally solving decentralized partially observable Markov decision processes (Dec-POMDPs) is a ha...
This paper is about planning in stochastic domains by means of partially observable Markov decision...
Decentralized partially observable Markov decision processes (Dec-POMDPs) pr...
While Markov Decision Processes (MDPs) have been shown to be effective models for planning under unc...