We show that the problem of finding an optimal stochastic 'blind' controller in a Markov decision process is an NP-hard problem. The corresponding decision problem is NP-hard, in PSPACE, and SQRT-SUM-hard, hence placing it in NP would imply a breakthrough in long-standing open problems in computer science. Our optimization result establishes that the more general problem of stochastic controller optimization in POMDPs is also NP-hard. Nonetheless, we outline a special case that is convex and admits efficient global solutions.
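To make the optimized object concrete, the following minimal sketch (ours, not taken from the paper) evaluates a blind, i.e. state-independent, stochastic stationary policy on a tiny tabular discounted MDP and runs a crude grid search over the action simplex. The helper name evaluate_blind_policy, the example transition and reward numbers, and the grid search are illustrative assumptions only; the hardness result concerns finding the globally optimal policy, for which no polynomial-time method is known in general.

import numpy as np

def evaluate_blind_policy(P, R, theta, gamma=0.95):
    """Expected discounted values of a 'blind' stochastic policy.

    theta[a] is the probability of action a, used in every state.
    P has shape (A, S, S): P[a, s, s'] = transition probability.
    R has shape (A, S):    R[a, s]     = expected immediate reward.
    """
    A, S, _ = P.shape
    # Markov chain and reward induced by mixing actions with weights theta.
    P_theta = np.tensordot(theta, P, axes=1)   # shape (S, S)
    r_theta = theta @ R                        # shape (S,)
    # Value function: v = (I - gamma * P_theta)^{-1} r_theta.
    return np.linalg.solve(np.eye(S) - gamma * P_theta, r_theta)

# Tiny two-state, two-action MDP (numbers are illustrative only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Brute-force search over the 1-simplex of blind policies.
best = max(((p, evaluate_blind_policy(P, R, np.array([p, 1.0 - p]))[0])
            for p in np.linspace(0.0, 1.0, 101)), key=lambda t: t[1])
print("best Pr(action 0): %.2f, value from state 0: %.3f" % best)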
Constrained partially observable Markov decision processes (CPOMDPs) have been used to model various...
We consider Markov decision processes (MDPs) with specifications given as Büchi (liveness) objective...
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework t...
It was recently shown that computing an optimal stochastic controller in a discounted infinite-hor...
Developing scalable algorithms for solving partially observable Markov decision processes (POMDPs) i...
Markov Decision Processes (MDPs) form a versatile framework used to model a wide range of optimizati...
This work surveys results on the complexity of planning under uncertainty. The planning model consid...
The search for finite-state controllers for partially observable Markov decision processes (POMDPs) ...
Optimal policy computation in finite-horizon Markov decision processes is a classical problem in opt...
This paper introduces algorithms for problems where a decision maker has to contr...
In this paper, we bring techniques from operations research to bear on the problem of choosi...
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework t...
Partially observable Markov decision process (POMDP) can be used as a model for planning in stochast...
We consider partially observable Markov decision processes (POMDPs) with omega-regular conditions sp...
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI r...