Abstract: In this paper we focus on spatialized decision problems, which we propose to model in the framework of (highly) multidimensional Markov Decision Processes (MDPs) that exhibit only local dependencies between variables. We propose to approximate a Markov chain on a multidimensional random variable by a Markov chain on a set of weakly dependent random variables. This makes it possible to (approximately) solve multidimensional MDPs with hundreds of variables, at the price of a loss of exactness in the process model. The method is still mostly empirical; however, it allows us to deal with decision problems far larger than those usually dealt with in the MDP framework.
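The idea of replacing the exact joint chain over all variables by a set of weakly dependent per-variable chains can be pictured with a small mean-field-style sketch. The model below (binary "sites" on a ring, made-up infestation dynamics, and the hypothetical local_transition and step_marginals functions) is only an assumed illustration of the flavour of such an approximation, not the algorithm described in the abstract.

```python
import numpy as np

# Illustrative sketch only (hypothetical model, not the paper's method):
# N binary "site" variables on a ring; each site's next state depends only on
# its two neighbours (local dependencies) and on a local binary action.
# Instead of the exact joint chain over 2**N states, we track one marginal
# probability per site and propagate it through the local model using the
# neighbours' current marginals -- a product-form ("weakly dependent")
# approximation of the multidimensional Markov chain.

N = 200  # hundreds of variables: the exact joint chain would have 2**200 states

def local_transition(p_self, p_left, p_right, action):
    """P(X_i = 1 at t+1) given the site's own marginal, its neighbours'
    marginals, and a local action (1 = treat). Made-up dynamics: a site stays
    'infested' unless treated, and catches infestation from infested neighbours."""
    contagion = 1.0 - (1.0 - 0.3 * p_left) * (1.0 - 0.3 * p_right)
    stay = p_self * (0.1 if action else 0.9)   # treated sites mostly recover
    appear = (1.0 - p_self) * contagion        # new infestation from neighbours
    return stay + appear

def step_marginals(marginals, actions):
    """One approximate time step: update every site's marginal independently,
    coupling sites only through their neighbours' *current* marginals."""
    new = np.empty_like(marginals)
    for i in range(len(marginals)):
        left, right = marginals[i - 1], marginals[(i + 1) % len(marginals)]
        new[i] = local_transition(marginals[i], left, right, actions[i])
    return new

# Usage: start with 10% infestation everywhere, treat every other site,
# and simulate the approximate chain for a few steps.
marginals = np.full(N, 0.1)
actions = np.array([i % 2 for i in range(N)])
for _ in range(5):
    marginals = step_marginals(marginals, actions)
print("mean infestation probability:", marginals.mean())
```

The point of the sketch is the complexity trade-off: the state kept per step is N marginals instead of a distribution over 2**N joint states, which is what makes problems with hundreds of variables tractable at the cost of exactness.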
Markov decision processes (MDPs) are models of dynamic decision making under uncertainty. These mode...
Thesis (Ph.D.)--University of Washington, 2013. The ability to plan in the presence of uncertainty abo...
As agents are built for ever more complex environments, methods that consider the uncertainty in the...
Abstract: The Markov Decision Process (MDP) framework is a tool for the efficient modelling and solvin...
The Markov Decision Process (MDP) framework is a tool for the efficient modell...
Abstract: Markov decision processes (MDPs) have become a popular model for real-world problems of pl...
We present a technique for computing approximately optimal solutions to stochastic resource allocati...
We present an approximation scheme for solving Markov Decision Processes (MDPs) in whi...
This paper is about planning in stochastic domains by means of partially observable Markov decision...
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework t...
Partially observable Markov decision processes (POMDPs) are an appealing tool for modeling planning ...
Abstract. The theory of Markov Decision Processes (MDPs) provides algorithms for generating an optima...
Abstract We introduce Multi-Environment Markov Decision Processes (MEMDPs) which are MDPs with a set...
We describe an extension of the Markov decision process model in which a continuous time dimension i...