Abstract: Markov Decision Processes (MDPs) describe a wide variety of planning scenarios ranging from military operations planning to controlling a Mars rover. However, today's solution techniques scale poorly, limiting MDPs' practical applicability. In this work, we propose algorithms that automatically discover and exploit the hidden structure of factored MDPs. Doing so helps solve MDPs faster and with less memory than state-of-the-art techniques. Our algorithms discover two complementary state abstractions: basis functions and nogoods. A basis function is a conjunction of literals; if the conjunction holds true in a state, this guarantees the existence of at least one trajectory to the goal. Conversely, a nogood is a conjunction whose pres...
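As a rough illustration of the two abstractions this abstract describes, a basis function or nogood can be modeled as a conjunction of literals evaluated against a factored state. This is a minimal sketch under assumed representations (states as sets of true propositions, literals as proposition/polarity pairs), not the authors' actual implementation:

```python
# Hypothetical sketch: basis functions and nogoods as conjunctions of literals.
# A state is the set of propositions currently true; a literal is a
# (proposition, polarity) pair, where polarity=False means the negated literal.

def holds(conjunction, state):
    """Return True if every literal in the conjunction is satisfied in the state."""
    return all((prop in state) == positive for prop, positive in conjunction)

# A basis function: if it holds in a state, at least one trajectory
# to the goal is guaranteed to exist from that state.
basis = [("has_fuel", True), ("road_clear", True)]

# A nogood: if it holds in a state, the state is a dead end
# (illustrating the "conversely" in the abstract).
nogood = [("has_fuel", False)]

state = {"has_fuel", "road_clear"}
assert holds(basis, state)       # state admits a path to the goal
assert not holds(nogood, state)  # state is not flagged as a dead end
```

A planner could use such checks to prune dead-end states (nogoods) and to assign heuristic value to states covered by a basis function; the propositions and helper names here are illustrative only.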
This dissertation investigates the problem of representation discovery in discrete Markov decision p...
Markov decision processes (MDPs), originally studied in the Operations Research (OR) community, provide...
We present a heuristic search algorithm for solving first-order MDPs (FOMDPs). Our approach combines...
Markov Decision Processes (MDPs) are employed to model sequential decision-mak...
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (M...
Abstract: Algorithms for provably experience-efficient exploration of MDPs have been generalized to ...
Solving Markov decision processes (MDPs) efficiently is challenging in many cases, for example, when...
The results of the latest International Probabilistic Planning Competition (IPPC-2008) indicate that...
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MD...
The ease or difficulty in solving a problem strongly depends on the way it is represented. For examp...
Graduation date: 2017. Markov Decision Processes (MDPs) are the de-facto formalism for studying sequen...
Markov decision processes (MDPs) offer a rich model that has been extensively used by the AI communit...