We address the problem of learning relationships between state variables in Partially Observable Markov Decision Processes (POMDPs) to improve planning performance. Specifically, we focus on Partially Observable Monte Carlo Planning (POMCP) and represent the acquired knowledge with a Markov Random Field (MRF). In particular, we propose a method for learning these relationships on a robot while POMCP is used to plan future actions. We then present an algorithm that handles episodes whose states are unlikely with respect to the equality relationships encoded in the MRF. Our approach uses the outcomes of the agent's actions to adapt the MRF online whenever a mismatch is detected between the MRF and the tr...
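To make the idea concrete, below is a minimal sketch, not the authors' implementation, of how pairwise equality relationships between state variables could be encoded in an MRF, used to bias the particle belief of a POMCP-style planner, and adapted online when an action outcome contradicts them. All names (EqualityMRF, reweighted_particles, the additive update rule) are hypothetical illustrations of the mismatch-driven correction described above, not the method proposed in the paper.

```python
# A minimal sketch (assumed, not the paper's implementation) of an MRF over
# discrete state variables that encodes equality relationships, biases a
# POMCP-style particle belief, and is adapted online from action outcomes.

import math
import random
from itertools import combinations


class EqualityMRF:
    """Pairwise MRF over n discrete state variables.

    strength[(i, j)] is the (log-space) strength of the belief that
    variables i and j take equal values.
    """

    def __init__(self, n_vars, init_strength=1.0):
        self.n_vars = n_vars
        self.strength = {pair: init_strength
                         for pair in combinations(range(n_vars), 2)}

    def log_score(self, state):
        """Unnormalised log-probability of a joint assignment."""
        return sum(s for (i, j), s in self.strength.items()
                   if state[i] == state[j])

    def weight(self, state):
        return math.exp(self.log_score(state))

    def update(self, observed_state, lr=0.2):
        """Online adaptation: strengthen pairs the observed outcome confirms,
        weaken pairs it contradicts (a simple illustrative rule)."""
        for (i, j) in self.strength:
            if observed_state[i] == observed_state[j]:
                self.strength[(i, j)] += lr
            else:
                self.strength[(i, j)] = max(0.0, self.strength[(i, j)] - lr)


def reweighted_particles(particles, mrf):
    """Resample a POMCP-style particle belief so that states consistent with
    the MRF's equality relationships are drawn more often."""
    weights = [mrf.weight(p) for p in particles]
    total = sum(weights)
    if total == 0.0:  # defensive guard; with non-negative strengths total > 0
        return list(particles)
    return random.choices(particles, weights=weights, k=len(particles))


if __name__ == "__main__":
    random.seed(0)
    # Three binary state variables; particles approximate the belief.
    particles = [tuple(random.randint(0, 1) for _ in range(3))
                 for _ in range(20)]
    mrf = EqualityMRF(n_vars=3)
    belief = reweighted_particles(particles, mrf)
    print(belief[:5])
    # Suppose an action outcome reveals a state in which variable 0 differs
    # from variables 1 and 2: the corresponding equalities are weakened.
    mrf.update(observed_state=(1, 0, 0))
    print(mrf.strength)
```

The additive update is only a stand-in for whatever adaptation rule the paper actually uses; its purpose here is to show how information from action outcomes can both reinforce and weaken the equality relationships stored in the MRF.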