We consider the problem of approximating the values and the optimal policies in risk-averse discounted Markov Decision Processes with infinite horizon. We study the properties of the rolling horizon and the approximate rolling horizon procedures, proving bounds which imply the convergence of the procedures when the horizon length tends to infinity. We also analyze the effects of uncertainties on the transition probabilities, the cost functions and the discount factors.
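The rolling horizon procedure discussed above can be illustrated with a minimal risk-neutral sketch: at each decision epoch, solve an N-stage finite-horizon problem by backward induction from the current state and apply only its first-stage optimal action. This is a generic illustration of the classical (risk-neutral) procedure, not the paper's risk-averse operators; all names (`finite_horizon_values`, `rolling_horizon_step`) and the array layout are assumptions made for the example.

```python
import numpy as np

def finite_horizon_values(P, c, beta, N):
    """Backward induction for an N-stage discounted cost MDP.

    Assumed layout (illustrative only):
      P : (A, S, S) array, P[a, s, t] = transition probability s -> t under a
      c : (S, A) array of one-stage costs
      beta : discount factor in (0, 1)
    Returns the N-stage value function and the first-stage greedy policy.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    first_action = np.zeros(S, dtype=int)
    for _ in range(N):
        # Q[s, a] = c[s, a] + beta * sum_t P[a, s, t] * v[t]
        Q = c + beta * np.einsum('ast,t->sa', P, v)
        first_action = Q.argmin(axis=1)
        v = Q.min(axis=1)
    return v, first_action

def rolling_horizon_step(P, c, beta, N, s):
    """One rolling-horizon move from state s: solve the N-horizon
    problem and apply its first-stage optimal action."""
    _, pi0 = finite_horizon_values(P, c, beta, N)
    return pi0[s]
```

As the horizon length N grows, the N-stage values converge geometrically (at rate beta) to the infinite-horizon values, which is the mechanism behind the convergence bounds referred to in the abstract.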
This paper studies the convergence of value-iteration functions and the existence of error b...
This paper considers Markov decision processes (MDPs) with unbounded rates, as a function of state. ...
Considerable numerical experience indicates that the standard value iteration procedure for ...
We study the properties of the rolling horizon and the approximate rolling horizon procedures for th...
We study the behaviour of the rolling horizon procedure for the case of two-pe...
We consider an approximation scheme for solving Markov decision processes (MDPs) with counta...
In this work, we deal with a discrete-time infinite horizon Markov decision process with locally com...
We investigate the problem of minimizing the Average-Value-at-Risk (AVaR) of the discounted cost o...
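The Average-Value-at-Risk criterion mentioned in the entry above has a simple empirical form: the mean of the worst (1 - tau) fraction of cost outcomes. A minimal sketch, under the assumption that the discounted costs are available as i.i.d. samples (the helper name `average_value_at_risk` is hypothetical):

```python
import numpy as np

def average_value_at_risk(samples, tau):
    """Empirical Average-Value-at-Risk (AVaR, a.k.a. CVaR) of a cost
    distribution at level tau: the mean of the samples at or above
    the tau-quantile (the Value-at-Risk)."""
    x = np.sort(np.asarray(samples, dtype=float))
    var = np.quantile(x, tau)   # Value-at-Risk at level tau
    tail = x[x >= var]          # worst (1 - tau) fraction of outcomes
    return tail.mean()
```

For tau = 0, AVaR reduces to the expected cost; as tau approaches 1 it approaches the worst-case cost, which is how the criterion interpolates between risk-neutral and worst-case optimization.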
This paper is related to Markov Decision Processes. The optimal control problem is to minimi...
We study the behavior of the rolling horizon procedure for semi-Markov decision processes, with infi...
The aim of this paper is to give an overview of recent developments in the area of successive approx...
This paper is concerned with the links between the Value Iteration algorithm and the Rolling Horizon...
In this paper we formulate Markov Decision Processes with Random Horizon. We show the optimality equ...
Building on the receding horizon approach by Hernandez-Lerma and Lasserre in solving Markov decision ...