How does uncertainty affect a robot when attempting to generate a control policy to achieve some objective? How sensitive is the obtained control policy to perturbations? These are the central questions addressed in this dissertation. For most real-world robotic systems, the state of the system is observed only indirectly through limited sensor modalities. Since the actual state of the robot is not fully observable, only partial observations are available from which to infer it. Further complicating matters, the system may be subject to disturbances that perturb not only the evolution of the system but also the sensor data. Determining policies to effectively and efficiently govern the behavior of the system ...
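In concrete terms, the standard device for coping with such indirect observations is to maintain a belief, a probability distribution over possible states, and to update it recursively as actions are taken and measurements arrive. The following is a minimal sketch of a discrete Bayes filter of that kind; the two-state models and all names are illustrative placeholders, not drawn from the work summarized above.

```python
import numpy as np

# Illustrative two-state example (e.g., a door that is open or closed).
# T[a][s, s'] : probability of reaching s' from s under action a.
# Z[a][s', o] : probability of observing o after landing in s' under action a.
T = {"stay": np.array([[1.0, 0.0], [0.0, 1.0]]),
     "push": np.array([[0.8, 0.2], [0.0, 1.0]])}
Z = {"stay": np.array([[0.9, 0.1], [0.2, 0.8]]),
     "push": np.array([[0.9, 0.1], [0.2, 0.8]])}

def belief_update(belief, action, observation):
    """One step of the discrete Bayes filter: predict, then correct."""
    predicted = T[action].T @ belief                   # push the belief through the dynamics
    corrected = Z[action][:, observation] * predicted  # weight by the observation likelihood
    return corrected / corrected.sum()                 # renormalise to a distribution

b = np.array([0.5, 0.5])          # initially uncertain about the state
b = belief_update(b, "push", 1)   # act, observe, and refine the belief
print(b)
```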
Partially Observable Markov Decision Process (POMDP) models have been applied to low-level robot control ...
Stochastic motion planning is of crucial importance in robotic applications not only because of the ...
The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence ...
This dissertation addresses the problem of stochastic optimal control with imperfect measurements. ...
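In the linear-Gaussian special case of stochastic optimal control with imperfect measurements, the separation principle allows the state to be estimated with a Kalman filter and the estimate fed to a feedback controller. The sketch below shows one predict/correct cycle of that filter only; the system matrices and noise covariances are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Illustrative 1D double integrator with noisy position measurements.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (position, velocity)
B = np.array([[0.005], [0.1]])           # control input matrix
C = np.array([[1.0, 0.0]])               # only position is measured
Q = 1e-3 * np.eye(2)                     # process-noise covariance
R = np.array([[1e-2]])                   # measurement-noise covariance

def kalman_step(x_hat, P, u, y):
    """One predict/update cycle of the Kalman filter."""
    # Predict the state estimate and its covariance through the dynamics.
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Correct the prediction with the noisy measurement.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = kalman_step(x_hat, P, u=np.array([[0.5]]), y=np.array([[0.12]]))
```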
In the real world, robots operate with imperfect sensors providing uncertain and incomplete information ...
We propose a new method for learning policies for large, partially observable Markov decision processes ...
Decision-making for autonomous systems acting in real-world domains is complex and difficult to for...
Recent research in the field of robotics has demonstrated the utility of probabilistic models for pe...
This thesis experimentally addresses the issue of planning under uncertainty in robotics, with refer...
The operation of a variety of natural or man-made systems subject to uncertainty is maintained within ...
This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially ...
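The core of such simulation-based schemes is to score candidate policy parameters by the average return of simulated finite-horizon rollouts and then search over those parameters. A rough sketch under that reading, with a made-up scalar system and a naive grid search standing in for the actual, far more sample-efficient learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_return(theta, horizon=20):
    """Roll out a parameterised feedback policy on a noisy toy system
    and return the accumulated reward (here: negative quadratic cost)."""
    x = rng.normal(0.0, 1.0)                        # uncertain initial state
    total = 0.0
    for _ in range(horizon):
        y = x + rng.normal(0.0, 0.1)                # imperfect measurement
        u = -theta * y                              # linear policy acting on the measurement
        x = 0.9 * x + u + rng.normal(0.0, 0.05)     # noisy dynamics
        total += -(x ** 2 + 0.1 * u ** 2)           # quadratic stage cost
    return total

def estimate_value(theta, n_rollouts=200):
    """Monte Carlo estimate of the policy's expected finite-horizon return."""
    return np.mean([simulate_return(theta) for _ in range(n_rollouts)])

# Crude search over policy parameters; it only illustrates the evaluate-then-search loop.
candidates = np.linspace(0.0, 1.5, 16)
best = max(candidates, key=estimate_value)
print("best gain:", best)
```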
Noisy sensing, imperfect control, and environment changes are defining characteristics ...
... approaches rely on samples to either obtain an estimate of the value function or a linearisation of the ...
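As an illustration of the linearisation branch, the Jacobians of an otherwise black-box simulator can be estimated from samples by finite differences around a nominal state and control; the toy dynamics, step size, and names below are assumptions made purely for the example.

```python
import numpy as np

def dynamics(x, u):
    """Black-box simulator standing in for the true (noise-free) dynamics."""
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (u[0] - np.sin(x[0]))])

def linearise(f, x0, u0, eps=1e-5):
    """Central-difference estimates of A = df/dx and B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearise(dynamics, x0=np.array([0.2, 0.0]), u0=np.array([0.0]))
```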