Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is essential for physically grounded AI since experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques to compensate for the bias caused by the difference between the data-sampling policies and the target policy. However, existing off-policy methods often do not take the variance of value function estimators explicitly into account, and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for op...
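As a rough illustration of the bias-variance control described above, the sketch below flattens a trajectory's importance weight with an exponent nu in [0, 1]: nu = 0 ignores the policy mismatch entirely (low variance, biased), while nu = 1 recovers ordinary importance sampling (unbiased, high variance). This is a minimal NumPy sketch of the general idea, not the estimator proposed in the abstract; the function name and interface are illustrative.

import numpy as np

def flattened_is_return(rewards, target_probs, behavior_probs, nu=0.5, gamma=0.99):
    """Estimate one trajectory's return under the target policy using a
    flattened importance weight: prod(pi_target / pi_behavior) ** nu."""
    rewards = np.asarray(rewards, dtype=float)
    ratios = np.asarray(target_probs, dtype=float) / np.asarray(behavior_probs, dtype=float)
    weight = np.prod(ratios) ** nu              # flattening exponent controls bias vs. variance
    discounts = gamma ** np.arange(len(rewards))
    return weight * np.sum(discounts * rewards)

# toy usage: one logged trajectory of length 3
est = flattened_is_return(
    rewards=[1.0, 0.0, 2.0],
    target_probs=[0.9, 0.5, 0.7],    # pi(a_t | s_t) under the evaluated policy
    behavior_probs=[0.6, 0.5, 0.8],  # pi_b(a_t | s_t) under the data-collecting policy
    nu=0.5,
)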
We consider the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goa...
In importance sampling (IS)-based reinforcement learning algorithms such as Proximal Policy Optimiza...
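For reference, the clipped surrogate objective popularized by PPO shows how the per-action importance ratio pi_new(a|s) / pi_old(a|s) enters such algorithms. The snippet below is a generic NumPy rendering of that objective, not code from the excerpted paper; inputs are assumed to be log-probabilities and advantage estimates for a batch of logged actions.

import numpy as np

def ppo_clipped_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO's clipped surrogate: the importance ratio is clipped to
    [1 - eps, 1 + eps] so the off-policy correction cannot blow up the update."""
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    adv = np.asarray(advantages)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return np.mean(np.minimum(unclipped, clipped))  # objective to be maximized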
Adaptive importance sampling (AIS) uses past samples to update the sampling po...
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past,...
Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy ...
How can we effectively exploit the collected samples when solving a continuous control task with Rei...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer...
Importance sampling is often used in machine learning when training and testing data come from diffe...
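The covariate-shift use of importance sampling mentioned here can be summarized in one line: reweight each training-set loss by p_test(x) / p_train(x) so that the training average approximates the test-distribution risk. A hedged sketch follows; the density values are assumed to be given, whereas in practice they must be estimated.

import numpy as np

def importance_weighted_risk(losses, train_density, test_density):
    """Importance-weighted empirical risk under covariate shift."""
    w = np.asarray(test_density) / np.asarray(train_density)
    return float(np.mean(w * np.asarray(losses)))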
In this paper we analyze a particular issue of estimation, namely the estimation of the exp...
A central challenge to applying many off-policy reinforcement learning algorithms to real world prob...
Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimatio...
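A standard instance of such an IS building block is the per-decision importance sampling estimator for off-policy evaluation. The sketch below is a textbook version, assuming logged per-step action probabilities under both policies; it is not taken from the excerpted paper.

import numpy as np

def per_decision_is(trajectories, gamma=0.99):
    """Per-decision importance sampling estimate of the target policy's value.

    Each trajectory is a list of (reward, target_prob, behavior_prob) tuples;
    reward r_t is weighted only by the ratios of steps up to t, which typically
    lowers variance compared with a full-trajectory weight."""
    estimates = []
    for traj in trajectories:
        cum_ratio, value = 1.0, 0.0
        for t, (r, p_tgt, p_beh) in enumerate(traj):
            cum_ratio *= p_tgt / p_beh
            value += (gamma ** t) * cum_ratio * r
        estimates.append(value)
    return float(np.mean(estimates))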
Adaptive importance sampling is a class of techniques for finding good proposal distributions for im...
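As a concrete, if toy, example of adapting a proposal from past samples, the loop below refits a Gaussian proposal by weighted moment matching after each batch of importance-weighted draws. This is a generic sketch of the idea, not the specific AIS scheme of the excerpted work; the target is passed as an unnormalized log-density.

import numpy as np

def adaptive_importance_sampling(log_target, n_iters=5, n_samples=1000, seed=0):
    """Draw from a Gaussian proposal, weight against the target, then refit
    the proposal's mean and std to the weighted samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 5.0                      # initial proposal N(mu, sigma^2)
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, n_samples)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
        log_w = log_target(x) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = np.sum(w * x)                    # weighted moment matching
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2)) + 1e-8
    return mu, sigma

# toy usage: adapt toward a narrow Gaussian target centred at 3
mu, sigma = adaptive_importance_sampling(lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2)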
This thesis considers three complications that arise from applying reinforcement learning to a real-...
Offline reinforcement learning involves training a decision-making agent based solely on historical ...