We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors provided the transition matrix is not too dense.
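To make the shape of the claimed guarantee concrete (a schematic reading of the abstract, not the paper's exact statement): writing T for the number of non-zero transition probabilities, H = 1/(1-\gamma) for the effective horizon, \epsilon for the accuracy, and \delta for the failure probability, and assuming the standard PAC scaling in \epsilon and \delta, the upper bound takes the form

\[
  \tilde{O}\!\left( \frac{T}{\epsilon^{2}\,(1-\gamma)^{3}} \,\log\frac{1}{\delta} \right),
\]

where \tilde{O} hides logarithmic factors. The cubic power of 1/(1-\gamma) is the "cubic dependence on the horizon" referred to above, the linear factor T is the state/action dependence, and the matching lower bound implies the rate is tight up to logarithmic factors whenever the transition matrix is not too dense.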