We study planning in relational Markov decision processes involving discrete and continuous states and actions. Such hybrid relational domains have so far received little attention. While several symbolic approaches have been proposed for hybrid and relational domains separately, they generally do not integrate the two, and they often make restrictive assumptions to keep exact inference possible. Removing those restrictions requires approximations such as Monte-Carlo methods. We propose HyBrel: a sample-based planner for hybrid relational domains that combines model-based approaches with state abstraction. HyBrel samples episodes and uses both the previous episodes and the model to approximate the Q-function.
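To make the sample-based idea concrete, the sketch below shows plain every-visit Monte-Carlo Q-estimation over model-sampled episodes, written in Python. It illustrates the general technique the abstract names, not HyBrel itself: the class ToyModel, the function plan, and all parameters are hypothetical, and the rounding step is only a crude stand-in for HyBrel's state abstraction.

import random
from collections import defaultdict

class ToyModel:
    """Hypothetical 1-D hybrid domain: a continuous position with discrete
    action effects. Rounding the successor state is a crude stand-in for
    state abstraction; without it, continuous states would never recur."""
    def step(self, state, action):
        nxt = round(state + action + random.gauss(0.0, 0.1), 1)  # noisy move
        reward = 1.0 if abs(nxt) < 0.5 else -0.1                 # goal: reach the origin
        return nxt, reward

def plan(model, start, actions, n_episodes=500, horizon=20, gamma=0.95, eps=0.2):
    """Sample episodes with the model and average the observed discounted
    returns to approximate Q(s, a) (every-visit Monte-Carlo)."""
    returns = defaultdict(list)   # (state, action) -> sampled returns
    q = defaultdict(float)        # running Q-function estimate
    for _ in range(n_episodes):
        s, episode = start, []
        for _ in range(horizon):
            if random.random() < eps:                        # explore
                a = random.choice(actions)
            else:                                            # exploit current estimate
                a = max(actions, key=lambda act: q[(s, act)])
            s_next, r = model.step(s, a)                     # sample the transition model
            episode.append((s, a, r))
            s = s_next
        g = 0.0                                              # Monte-Carlo backup
        for s_t, a_t, r_t in reversed(episode):
            g = r_t + gamma * g
            returns[(s_t, a_t)].append(g)
            q[(s_t, a_t)] = sum(returns[(s_t, a_t)]) / len(returns[(s_t, a_t)])
    return max(actions, key=lambda act: q[(start, act)])     # greedy first action

if __name__ == "__main__":
    print(plan(ToyModel(), start=3.0, actions=[-1.0, 0.0, 1.0]))

Note that the dictionary-keyed Q-estimate only accumulates evidence when a state-action pair recurs, which never happens with raw continuous states; this is exactly where a state abstraction (mimicked here by rounding) becomes essential in hybrid domains.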