Abstract — In this paper, we present a model-based reinforcement learning method for robotics that combines ideas from model identification and model predictive control. We use a feature-based representation of the dynamics that allows the dynamics model to be fitted with a simple least-squares procedure, and the features are identified from a high-level specification of the robot's morphology, consisting of the number and connectivity structure of its links. Model predictive control is then used to choose actions under an optimistic model of the dynamics, which produces an efficient and goal-directed exploration strategy. We present real-time experimental results on standard benchmark problems involving the pendulum, cartpole, and double pendulum.
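A minimal sketch of the two ingredients described above, assuming a linear-in-features model s_next ≈ W · φ(s, a) fitted by least squares and a random-shooting MPC planner; the feature map, horizon, and cost interface below are illustrative placeholders, not the paper's exact formulation:

```python
import numpy as np

def features(s, a):
    # Hypothetical feature map: state, action, simple trig terms, and a bias.
    return np.concatenate([s, a, np.sin(s), np.cos(s), [1.0]])

def fit_dynamics(S, A, S_next):
    # Least-squares fit of s_next ~ W @ phi(s, a) from logged transitions.
    Phi = np.stack([features(s, a) for s, a in zip(S, A)])
    W, *_ = np.linalg.lstsq(Phi, S_next, rcond=None)
    return W.T  # shape: (state_dim, feature_dim)

def mpc_action(W, s, cost_fn, horizon=15, n_samples=256, a_dim=1, rng=None):
    # Random-shooting MPC: sample action sequences, roll them out through
    # the fitted model, and return the first action of the cheapest sequence.
    rng = np.random.default_rng() if rng is None else rng
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, a_dim))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        x = s.copy()
        for a in seq:
            x = W @ features(x, a)
            costs[i] += cost_fn(x, a)
    return seqs[np.argmin(costs), 0]
```

Optimism-driven exploration could be layered on top of such a sketch, for example by subtracting an uncertainty bonus from the rollout cost, although the paper's precise exploration rule is not reproduced here.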