We consider a reinforcement learning setting where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model inducing an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.
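To make the setting concrete, below is a minimal illustrative sketch, not the algorithm analysed in the paper: each candidate model maps the history of past interactions to a state, and a standard tabular learner is then run on the representation a chosen model induces. The environment, model classes, and learner names (ToyChainEnv, LastObservationModel, WindowModel, run_with_model) and the Q-learning subroutine are assumptions made for illustration only.

```python
import random
from collections import defaultdict

# Toy environment (illustrative assumption, not from the paper): a two-state
# chain where the observation equals the hidden state.
class ToyChainEnv:
    n_actions = 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):
        # Action 1 tries to move right (succeeds w.p. 0.9), action 0 resets to 0.
        if action == 1 and random.random() < 0.9:
            self.s = 1
        elif action == 0:
            self.s = 0
        reward = 1.0 if self.s == 1 else 0.0
        return self.s, reward

# Candidate state-representation models: each maps the history of past
# (observation, action, reward) tuples to a state identifier.
class LastObservationModel:
    def state(self, history):
        return history[-1][0]

class WindowModel:
    # Uses the last k observations; such a model may only approximate
    # a Markovian representation.
    def __init__(self, k):
        self.k = k

    def state(self, history):
        return tuple(obs for obs, _, _ in history[-self.k:])

def run_with_model(env, model, n_steps, epsilon=0.1, alpha=0.1, gamma=0.95):
    """Run epsilon-greedy Q-learning on the state space induced by `model`
    (a simple stand-in for the optimistic algorithms analysed in the paper).
    Returns the total reward collected."""
    q = defaultdict(float)
    obs = env.reset()
    history = [(obs, None, 0.0)]
    total = 0.0
    for _ in range(n_steps):
        s = model.state(history)
        if random.random() < epsilon:
            a = random.randrange(env.n_actions)
        else:
            a = max(range(env.n_actions), key=lambda b: q[(s, b)])
        obs, r = env.step(a)
        history.append((obs, a, r))
        s2 = model.state(history)
        best_next = max(q[(s2, b)] for b in range(env.n_actions))
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        total += r
    return total

if __name__ == "__main__":
    random.seed(0)
    for name, model in [("last-obs", LastObservationModel()), ("window-2", WindowModel(2))]:
        print(name, run_with_model(ToyChainEnv(), model, n_steps=5000))
```

In the paper's setting the learner must additionally select among such models online, with regret measured against the best available representation; the sketch only shows how a given representation plugs into a learner.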
We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Mar...
Sequential decision making from experience, or reinforcement learning (RL), is a paradigm that is we...
We consider a class of sequential decision making problems in the presence of uncertainty, which bel...
We consider a reinforcement learning setting where the learner does not have e...
We consider an agent interacting with an environment in a single stream of actions, observations, an...
We consider an agent interacting with an environment in a single stream of actions, observations, ...
The problem of selecting the right state-representation in a reinforcement learning problem is consi...
We consider the problem of online reinforcement learning when several state re...
We consider a reinforcement learning setting where the learner also has to dea...
We study the role of the representation of state-action value functions in reg...
We consider a Reinforcement Learning setup without any (esp. MDP) assumptions on the environment. St...
We address the problem of model-based reinforcement learning in infinite state spaces. On...
The problem of reinforcement learning in an unknown and discrete Markov Decisi...
Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer ...
In this paper, we revisit the regret of undiscounted reinforcement learning in MDPs with a birth and...