We consider a discrete-time Markov decision process with Borel state and action spaces and a possibly unbounded cost function. We assume that the Markov transition kernel is absolutely continuous with respect to some reference probability measure. By replacing this probability measure with the empirical distribution of a sample of size n, we obtain a finite state space control problem, which is used to approximate the optimal value and an optimal policy of the original control model. We impose Lipschitz continuity properties on the control model and its associated density functions. We measure the accuracy of the approximation of the optimal value and of an optimal policy by means of a non-asymptotic concentration inequality.
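The sketch below illustrates the empirical-measure discretization described in this abstract on a toy one-dimensional problem: an i.i.d. sample drawn from the reference measure becomes the finite state space, the transition densities are renormalized over the sampled points, and the resulting finite model is solved by standard value iteration. The density, cost function, action grid, discount factor, and sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def density(y, x, a):
    # Toy transition density q(y | x, a) w.r.t. mu = Uniform[0, 1] (assumption).
    return 1.0 + 0.5 * np.cos(2.0 * np.pi * (y - x * a))

def cost(x, a):
    # Toy one-stage cost function (assumption).
    return (x - a) ** 2

actions = np.linspace(0.0, 1.0, 11)  # finite action grid (assumption)
alpha = 0.9                          # discount factor
n = 200                              # sample size

# Step 1: draw an i.i.d. sample from mu and replace mu by its empirical distribution.
sample = rng.uniform(0.0, 1.0, size=n)

# Step 2: the sampled points form the finite state space; transition
# probabilities are the densities renormalized over the sample.
def transition_matrix(a):
    q = density(sample[None, :], sample[:, None], a)  # q[i, j] = q(y_j | x_i, a)
    return q / q.sum(axis=1, keepdims=True)

# Step 3: value iteration on the finite model approximates the optimal
# discounted value (and a greedy policy) at the sampled states.
V = np.zeros(n)
for _ in range(1000):
    Q = np.stack([cost(sample, a) + alpha * transition_matrix(a) @ V for a in actions])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = actions[Q.argmin(axis=0)]
print("approximate optimal values at the first sampled states:", V[:5])
print("greedy actions at those states:", policy[:5])
```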
For general state and action space Markov decision processes, we present sufficient conditions for t...
This paper deals with a continuous-time Markov decision process M, with Borel state and action space...
In this paper, we propose an approach for approximating the value function and an ϵ-optimal policy o...
In this paper we study the numerical approximation of the optimal long-run average cost of a continu...
We consider a class of discrete-time Markov control processes with Borel state and action sp...
In this work, we deal with a discrete-time infinite horizon Markov decision process with locally com...
This paper deals with discrete-time Markov Decision Processes (MDPs) under constraints where all th...