We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to induce an optimistic SSP problem whose associated value iteration scheme is guaranteed to converge. We prove that EB-SSP achieves the minimax regret rate O(B*√(SAK)), where K is the number of episodes, S is the number of states, A is the number of actions, and B* bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-...
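The value iteration scheme that the abstract above refers to can be sketched on a toy SSP. Everything below (the 3-state MDP, its transition probabilities, and its costs) is an illustrative assumption, not taken from the paper; it only shows the planning core: iterate V(s) ← min_a [c(s,a) + Σ_s' P(s'|s,a) V(s')] until convergence, with the goal state absorbing and cost-free.

```python
import numpy as np

# Toy SSP (illustrative, not from the paper): states 0 and 1 are
# non-goal, state 2 is the absorbing, zero-cost goal state.
S, A = 3, 2
P = np.zeros((S, A, S))  # P[s, a, s'] = transition probability
c = np.ones((S, A))      # unit cost per step away from the goal

P[0, 0] = [0.1, 0.0, 0.9]  # state 0, action 0: reach goal w.p. 0.9
P[0, 1] = [0.0, 1.0, 0.0]  # state 0, action 1: move to state 1
P[1, 0] = [0.5, 0.0, 0.5]
P[1, 1] = [0.0, 0.2, 0.8]
P[2, :, 2] = 1.0           # goal is absorbing
c[2, :] = 0.0              # no cost once the goal is reached

# SSP value iteration: Bellman backups until the update is negligible.
V = np.zeros(S)
for _ in range(10_000):
    Q = c + P @ V          # Q[s, a] = c(s, a) + sum_s' P(s'|s,a) V(s')
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new

# V(s) is the minimal expected cost-to-goal; its maximum over states
# plays the role of B* in the regret bound quoted above.
print(V)
```

For this toy instance the fixed point is V(0) = 1/0.9 ≈ 1.11, V(1) = 1/0.8 = 1.25, V(2) = 0, so B* = 1.25 here.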
The parameters for a Markov Decision Process (MDP) often cannot be specified exactly. Uncertain MDPs...
Many popular reinforcement learning problems (e.g., navigation in a maze, some...
Goal-oriented Reinforcement Learning, where the agent needs to reach the goal state while simultaneo...
We consider the objective of computing an ε-optimal policy in a stochastic sho...
We propose an algorithm that uses linear function approximation (LFA) for stochastic shortest path (...
In this invited contribution, we revisit the stochastic shortest path problem, and show how recent r...
Stochastic Shortest Path Problems (SSPs) are a common representation for probabilistic planning prob...
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with ...
The stochastic shortest path problem lies at the heart of many questions in the formal verification ...
Two extreme approaches can be applied to solve a probabilistic planning problem, namely closed loop ...
Fully observable decision-theoretic planning problems are commonly modeled as stochastic shortest pa...
In this paper, we consider planning in stochastic shortest path problems, a subclass of Markov Decis...