We present a heuristic search algorithm for solving first-order MDPs (FOMDPs). Our approach combines first-order state abstraction, which avoids evaluating states individually, with heuristic search, which avoids evaluating all states. First, we apply state abstraction directly to the FOMDP, avoiding propositionalization; this kind of abstraction is referred to as first-order state abstraction. Second, guided by an admissible heuristic, the search is restricted to those states that are reachable from the initial state. We demonstrate the usefulness of these techniques for solving FOMDPs with a system, referred to as FC-Planner, that entered the probabilistic track of the International Planning Competition (IPC’2004).
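The second ingredient above, heuristic search that updates only states reachable from the initial state, can be illustrated with a minimal RTDP-style sketch. This is not the FC-Planner algorithm itself, only a generic illustration of the idea; all names (`rtdp`, `transition`, `heuristic`, etc.) are illustrative, and the value of an unvisited state is taken lazily from the admissible heuristic rather than stored in a full table.

```python
import random

def rtdp(s0, actions, transition, reward, heuristic, goal,
         gamma=0.95, trials=200):
    """RTDP-style heuristic search: values are initialized lazily from an
    admissible (optimistic) heuristic, and Bellman backups are applied only
    to states actually reached in trials starting from s0."""
    V = {}  # value table, populated only for visited (reachable) states

    def value(s):
        return V.get(s, heuristic(s))  # fall back to the heuristic

    def q(s, a):
        return sum(p * (reward(s, a, s2) + gamma * value(s2))
                   for s2, p in transition(s, a))

    for _ in range(trials):
        s = s0
        for _ in range(100):  # cap trial length
            if goal(s):
                break
            a = max(actions(s), key=lambda a: q(s, a))  # greedy action
            V[s] = q(s, a)  # backup at the visited state only
            # sample a successor according to the transition probabilities
            succs, probs = zip(*transition(s, a))
            s = random.choices(succs, weights=probs)[0]
    return V
```

On a toy deterministic chain 0 → 1 → 2 → 3 with goal state 3 and reward 1 on reaching it, the returned table contains values only for the states the trials visited; unreachable states are never touched, which is exactly the saving the abstract describes.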
Most traditional approaches to probabilistic planning in relationally specified MDPs rely on groundi...
Many traditional solution approaches to relationally specified decision-theoretic planning p...
We present a heuristic-based algorithm for solving restricted Markov decision processes (MDPs). Our ...
We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). ...
We describe the version of the GPT planner to be used in the planning competition. This version, cal...
Recent algorithms like RTDP and LAO* combine the strengths of Heuristic Search (HS) and Dynamic Prog...
Dynamic programming is a well-known approach for solving MDPs. In large state spaces, asynchronous v...
We describe a planner that participates in the Probabilistic Planning Track of the 2004 Internationa...
Many MDPs exhibit a hierarchical structure where the agent needs to perform various subtasks that a...
Markov Decision Processes (MDPs) are employed to model sequential decision-mak...
We describe a planning algorithm that integrates two approaches to solving Markov decision processe...
Markov Decision Processes (MDPs) describe a wide variety of planning scenarios ranging from ...
We propose a heuristic search algorithm for finding optimal policies in a new class of sequential de...