Recent research in decision-theoretic planning has focused on making the solution of Markov decision processes (MDPs) more feasible. We develop a set of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods---which vary in the tradeoff between complexity and accuracy---produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. Furthermore, the results of our methods can be used by existing (exact and approximate) abstraction algo...
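The core idea above---computing which states are reachable from a known initial state so that unreachable parts of the MDP can be pruned---can be sketched with a simple forward search. This is a minimal illustrative sketch over a flat (enumerated-state) MDP, not the paper's structured, Bayesian-network-based algorithm; the function and variable names (`reachable_states`, `successors`) are hypothetical.

```python
from collections import deque

def reachable_states(initial_states, actions, successors):
    """Return every state reachable from `initial_states` under any policy.

    `successors(state, action)` is assumed to return the states reachable
    with nonzero probability in one step.
    """
    seen = set(initial_states)
    frontier = deque(initial_states)
    while frontier:
        s = frontier.popleft()
        for a in actions:
            for t in successors(s, a):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen

# Tiny example: state 'c' is never reachable from 'a', so anything that
# occurs only in 'c' could be pruned from the problem description.
succ = {('a', 'go'): ['b'], ('b', 'go'): ['a', 'b'], ('c', 'go'): ['c']}
print(sorted(reachable_states({'a'}, ['go'],
                              lambda s, a: succ.get((s, a), []))))
# → ['a', 'b']
```

The structured methods described in the abstract trade exactness for speed by over-approximating this reachable set at the level of state variables rather than enumerating states.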
This thesis is about chance and choice, or decisions under uncertainty. The desire for creating an ...
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework t...
This paper is about planning in stochastic domains by means of partially observable Markov decision...
We report on new strategies for model checking quantitative reachability properties of Markov decisi...
Markov decision processes (MDPs), originally studied in the Operations Research (OR) community, provide...
Markov decision processes (MDPs) have recently been proposed as useful conceptual models for underst...
We consider the problem of approximating the reachability probabilities in Markov decision processes...
Abstract. Verification of reachability properties for probabilistic systems is usually based on varian...
Markov Decision Problems (MDPs) are the foundation for many problems that are of interest to researc...
Abstract. We report on a novel development to model check quantitative reachability properties on M...
We investigate the use of Markov Decision Processes as a means of representing worlds in which action...
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI r...
This paper studies parametric Markov decision processes (pMDPs), an extension to Markov decision pro...
Markov decision processes (MDPs) are models of dynamic decision making under uncertainty. These mode...
Abstract. We report on new strategies for model checking quantitative reachability properties of Ma...