We consider the problem of learning action models for planning in unknown stochastic environments that can be defined using the Probabilistic Planning Domain Description Language (PPDDL). As input, we are given a set of previously executed trajectories, and the main challenge is to learn an action model whose goal achievement probability is similar to that of the policies used to create these trajectories. To this end, we introduce a variant of PPDDL in which the transition probabilities are uncertain, specified by an interval for each factor that contains the respective true transition probability. Then, we present SAM+, an algorithm that learns such an imprecise-PPDDL environment model. SAM+ has polynomial time and sample complexity.
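A minimal sketch of the interval-valued transition model described above, written in Python. All names here (IntervalEffect, intervals_are_consistent, the example action and bounds) are illustrative assumptions, not definitions from the paper: the point is only that each probabilistic outcome of an action carries an interval assumed to contain its unknown true probability, rather than a point estimate.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IntervalEffect:
    """One probabilistic outcome of an action, with an interval that is
    assumed to contain the (unknown) true transition probability."""
    effect: str   # e.g. a grounded PPDDL effect such as "(holding ?b)"
    low: float    # lower bound on the true probability
    high: float   # upper bound on the true probability

    def contains(self, p: float) -> bool:
        # A candidate point probability is consistent with the interval.
        return self.low <= p <= self.high


def intervals_are_consistent(effects: list[IntervalEffect]) -> bool:
    """Sanity check: some assignment of point probabilities inside the
    intervals can sum to 1, a necessary condition for the intervals to
    describe a valid (imprecise) distribution over outcomes."""
    return sum(e.low for e in effects) <= 1.0 <= sum(e.high for e in effects)


# Hypothetical action with two outcomes whose probabilities are only
# known up to intervals, e.g. as estimated from observed trajectories.
pickup_effects = [
    IntervalEffect("(holding ?b)", low=0.85, high=0.95),
    IntervalEffect("(on-table ?b)", low=0.05, high=0.15),
]
assert intervals_are_consistent(pickup_effects)
assert pickup_effects[0].contains(0.9)
```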