Uncertain partially observable Markov decision processes (uPOMDPs) allow the probabilistic transition and observation functions of standard POMDPs to belong to a so-called uncertainty set. Such uncertainty, referred to as epistemic uncertainty, captures uncountable sets of probability distributions caused by, for instance, a lack of available data. We develop an algorithm to compute finite-memory policies for uPOMDPs that robustly satisfy specifications against any admissible distribution. In general, computing such policies is theoretically and practically intractable. We provide an efficient solution to this problem in four steps. (1) We state the underlying problem as a nonconvex optimization problem with infinitely many constraints. (2)...
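As a rough illustration of step (1), and using our own notation rather than the paper's, the robust policy-synthesis problem can be pictured as a semi-infinite program: maximize a guaranteed value lambda for a finite-memory policy sigma, subject to one value constraint per admissible transition function P in an (for example, interval-shaped) uncertainty set.

\begin{align*}
\max_{\sigma,\,\lambda}\quad & \lambda\\
\text{s.t.}\quad & V^{\sigma}_{P}(s_{0}) \ \ge\ \lambda \qquad \text{for all } P \in \mathcal{P},\\
& \mathcal{P} = \bigl\{\, P \ \big|\ P(s'\mid s,a) \in [\underline{P}(s'\mid s,a),\ \overline{P}(s'\mid s,a)],\ \textstyle\sum_{s'} P(s'\mid s,a)=1 \,\bigr\}.
\end{align*}

Because P ranges over an uncountable uncertainty set, the first constraint is in fact infinitely many constraints, and the value V^{\sigma}_{P} depends on products of policy and transition probabilities, which is why the program is nonconvex. This sketch is only meant to convey the structure described in the abstract, not the paper's exact formulation.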