Graduation date: 2017

Markov Decision Processes (MDPs) are the de facto formalism for studying sequential decision-making problems under uncertainty, ranging from classical problems such as inventory control and path planning to more complex problems such as reservoir control under rainfall uncertainty and emergency response optimization for fire and medical emergencies. Most prior research has focused on exact and approximate solutions to MDPs with factored states, assuming a small number of actions. In contrast, many applications are most naturally modeled as having factored actions described in terms of multiple action variables. In this thesis we study domain-independent algorithms that leverage the factored action structure in t...
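The contrast between factored states and factored actions is easiest to see in code. The sketch below is not from the thesis; it is a minimal, hypothetical example (a reservoir-style state variable and three Boolean action variables, all names invented) showing that a flat solver such as value iteration must enumerate every one of the 2^k joint actions, which is exactly the blow-up that factored-action algorithms aim to avoid.

# A minimal illustrative sketch (not from the thesis): a tiny MDP whose action is
# a vector of Boolean action variables, solved by flat value iteration. All names
# (run_pump, open_gate, sound_alarm, the dynamics, the reward) are hypothetical;
# the point is that the joint action space has 2^k elements for k action variables.
from itertools import product

# Factored action: every combination of k Boolean action variables is one joint action.
ACTION_VARS = ["run_pump", "open_gate", "sound_alarm"]          # k = 3 -> 8 joint actions
JOINT_ACTIONS = list(product([0, 1], repeat=len(ACTION_VARS)))

STATES = ["low", "medium", "high"]                              # e.g. reservoir level
GAMMA = 0.95


def transition(state, action):
    """Hypothetical dynamics: return a list of (next_state, probability)."""
    run_pump, open_gate, _ = action
    drain = run_pump + open_gate                                # 0, 1 or 2 units of drainage
    idx = STATES.index(state)
    next_idx = max(0, min(len(STATES) - 1, idx + 1 - drain))    # rain pushes the level up by 1
    return [(STATES[next_idx], 0.8), (state, 0.2)]


def reward(state, action):
    """Hypothetical reward: penalise high water and the cost of acting."""
    return -10.0 * (state == "high") - 0.5 * sum(action)


def value_iteration(iters=200):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {
            s: max(
                reward(s, a) + GAMMA * sum(p * V[s2] for s2, p in transition(s, a))
                for a in JOINT_ACTIONS                          # flat enumeration: 2^k candidates
            )
            for s in STATES
        }
    return V


if __name__ == "__main__":
    print(value_iteration())

Even in this toy problem the inner maximisation touches all 2^k joint actions per state; with tens of action variables that enumeration becomes the bottleneck, which motivates algorithms that exploit the factored action structure instead.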
Recent advances in Symbolic Dynamic Programming (SDP) combined with the extended algebraic decision ...
Markov Decision Processes (MDPs) are employed to model sequential decision-mak...
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (M...
Hybrid (mixed discrete and continuous) state and action Markov Decision Processes (HSA-MDPs) provide...
Thesis (Ph.D.)--University of Washington, 2013. The ability to plan in the presence of uncertainty abo...
We describe a planning algorithm that integrates two approaches to solving Markov decision processes...
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, whil...
We investigate the use of Markov Decision Processes as a means of representing worlds in which action...
We provide a method, based on the theory of Markov decision processes, for efficient planning in sto...
The Markov Decision Process (MDP) framework is a tool for the efficient modelling and solvin...
Markov Decision Processes with factored state and action spaces, usually referred to as FA-FMDPs, pr...