Abstract. Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for solving them run in time polynomial in the size of the state space, where this size is extremely large for most real-world planning problems of interest. Recent AI research has addressed this problem by representing the MDP in a factored form. Factored MDPs, however, are not amenable to traditional solution methods that call for an explicit enumeration of the state space. One familiar way to solve MDP problems with very large state spaces is to form a reduced (or aggregated) MDP with the same properties as the original MDP by combining “equivalent” states. In...
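The following is a minimal sketch, not taken from any of the papers listed here, of the aggregation idea described in the abstract above: states with identical rewards and identical transition behaviour with respect to the current partition are merged, and the resulting partition defines a reduced MDP. All names (aggregate_mdp, states, actions, P, R) are illustrative assumptions, not an API from the cited work.

from itertools import groupby

def aggregate_mdp(states, actions, P, R):
    """Partition `states` into blocks of equivalent (bisimilar) states.

    P[s][a] is a dict mapping next states to probabilities; R[s][a] is the reward.
    Returns a list of blocks (sets of states); a reduced MDP can be built by
    taking one representative state per block.
    """
    # Initial partition: group states by their reward signature over all actions.
    reward_key = lambda s: tuple(R[s][a] for a in actions)
    partition = [set(g) for _, g in groupby(sorted(states, key=reward_key), key=reward_key)]

    # Refinement loop: split a block whenever two of its states assign different
    # total probability to some block of the current partition under some action.
    while True:
        def block_key(s):
            # Rounding guards against floating-point noise when grouping keys.
            return tuple(
                tuple(round(sum(P[s][a].get(t, 0.0) for t in blk), 10) for blk in partition)
                for a in actions
            )
        refined = [set(g)
                   for blk in partition
                   for _, g in groupby(sorted(blk, key=block_key), key=block_key)]
        if len(refined) == len(partition):  # refinement only ever splits blocks
            return refined
        partition = refined

On the final partition, a reduced MDP can assign each block the shared reward of its members and, for each action, the total probability its members place on each target block, which is the "combining equivalent states" construction the abstract refers to.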
This paper is about planning in stochastic domains by means of partially observable Markov decision...
The solution of Markov Decision Processes (MDPs) often relies on special properties of the processes...
This paper is concerned with modeling planning problems involving uncertainty as discrete-time, fini...
Markov decision process (MDP), originally studied in the Operations Research (OR) community, provide...
Model minimization in Factored Markov Decision Processes (FMDPs) is concerned with finding the most ...
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MD...
In many real-world applications of Markov Decision Processes (MDPs), the number of states is so larg...
We present new algorithms for computing and approximating bisimulation metrics in Markov Decision Pr...
Markov decision processes (MDPs) have recently been proposed as useful conceptual models for underst...
Markov decision processes (MDP) offer a rich model that has been extensively used by the AI communit...
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (M...
Abstract. The theory of Markov Decision Processes (MDPs) provides algorithms for generating an optimal...
We present a class of metrics, defined on the state space of a finite Markov decision process (MDP)...