We consider an MDP setting in which the reward function is allowed to change at each time step of play (possibly in an adversarial manner), while the dynamics remain fixed. As in the experts setting, we ask how well an agent can do compared to the reward achieved by the best stationary policy over time. We provide efficient algorithms whose regret bounds have no dependence on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions. We also show that when the dynamics are allowed to change over time, the problem becomes computationally hard.
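The experts-style comparator described above can be sketched with a plain exponential-weights (Hedge) update: treat each candidate stationary policy as an expert, reweight by observed per-round rewards, and measure regret against the best fixed expert in hindsight. This is a minimal illustrative sketch, not the paper's algorithm; the function name, learning rate, and toy reward sequence are assumptions for the example.

```python
import math

def hedge(reward_seq, n_experts, eta=0.5):
    """Exponential-weights (Hedge) over experts.

    reward_seq: list of rounds; each round is a list of per-expert
    rewards in [0, 1] (the rewards may change adversarially per round,
    mirroring the changing-reward MDP setting).
    Returns (expected reward collected, regret vs the best fixed expert).
    """
    weights = [1.0] * n_experts
    total = 0.0
    for rewards in reward_seq:
        z = sum(weights)
        probs = [w / z for w in weights]          # play experts proportionally
        total += sum(p * r for p, r in zip(probs, rewards))
        # multiplicative update: boost experts that did well this round
        weights = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    best_fixed = max(
        sum(round_rewards[i] for round_rewards in reward_seq)
        for i in range(n_experts)
    )
    return total, best_fixed - total

# Toy adversary: expert 0 always pays 1, expert 1 always pays 0.
seq = [[1.0, 0.0] for _ in range(100)]
collected, regret = hedge(seq, n_experts=2)
```

On this toy sequence the regret stays small (a constant for a fixed gap, logarithmic in the number of experts in general), illustrating the kind of guarantee the abstract refers to; extending the comparator from experts to stationary policies is exactly where the horizon-time dependence enters.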
The problem of reinforcement learning in an unknown and discrete Markov Decisi...
The Configurable Markov Decision Process framework includes two entities: a Reinforcement Learning a...
Sequential decision making is a fundamental task faced by any intelligent agent in an extended inter...
We consider a control problem where the decision maker interacts with a standard Markov d...
We consider the problem of learning a policy for a Markov decision process consistent with data capt...
We consider the problem of online reinforcement learning when several state re...
We consider an agent interacting with an environment in a single stream of actions, observations, an...
In this work we present a multi-armed bandit framework for online expert selection in Markov decisio...
We study online reinforcement learning in linear Markov decision processes with adversarial losses a...
General-purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observatio...
We consider online learning in finite stochastic Markovian environments where ...
Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer ...