The Markov decision process (MDP), originally studied in the Operations Research (OR) community, provides a natural framework for modeling a wide variety of sequential decision-making problems. Because of its expressive power, the AI community has adopted the MDP framework to model complex stochastic planning problems. However, this expressiveness comes at a hefty price when it comes to solving the MDP model and obtaining an optimal plan. Scaling up solution algorithms for MDPs is thus a critical research topic in AI that has received a lot of attention. In this thesis I study the role of representation and abstraction in scaling up solution methods for various MDP models. Three variants of MDP models are studied in this thesi...
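The standard way to obtain such an optimal plan is dynamic programming; the following is a minimal sketch of value iteration on a hypothetical two-state MDP (the states, actions, transition probabilities, and rewards below are illustrative, not drawn from any of the works summarized here):

```python
# Minimal value iteration for a tiny, hypothetical MDP.
# P[s][a] is a list of (probability, next_state) pairs; R[s][a] is a reward.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup until values converge."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:  # stop when the largest update is below tolerance
            return V

# Hypothetical two-state example: "stay" is safe, "go" is risky but can
# move the agent into the rewarding state s1.
states, actions = ["s0", "s1"], ["stay", "go"]
P = {
    "s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
    "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]},
}
R = {
    "s0": {"stay": 0.0, "go": -1.0},
    "s1": {"stay": 1.0, "go": 0.0},
}
V = value_iteration(states, actions, P, R)
```

The cost of this sweep grows with the size of the state space, which is exactly why the representation and abstraction techniques surveyed in these abstracts matter.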
We address the problem of optimally controlling stochastic environments that are partially observ-ab...
In this paper, we consider planning in stochastic shortest path problems, a subclass of Markov Decis...
As agents are built for ever more complex environments, methods that consider the uncertainty in the...
This paper is about planning in stochastic domains by means of partially observable Markov decision...
Thesis (Ph.D.), University of Washington, 2013. The ability to plan in the presence of uncertainty abo...
Markov decision processes (MDPs) have recently been proposed as useful conceptual models for underst...
We investigate the use of Markov Decision Processes as a means of representing worlds in which action...
Markov Decision Processes (MDPs) are not able to make use of domain information effectively due to t...
Partially observable Markov decision process (POMDP) can be used as a model for planning in stochast...
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI r...
Abstract. The theory of Markov Decision Processes (MDPs) provides algorithms for generating an optima...
Abstract. Many stochastic planning problems can be represented using Markov Decision Processes (MDPs)....
Markov Decision Problems (MDPs) are the foundation for many problems that are of interest to researc...
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, whil...
Markov decision processes (MDPs) are a mathematical formalism from the field of artificial intelli...