In this paper we study a Markov decision process with a non-linear discount function. Our approach is in the spirit of the von Neumann–Morgenstern concept and is based on the notion of expectation. First, we define a utility on the space of trajectories of the process in the finite and infinite time horizons, and then take its expected value. It turns out that the associated optimization problem leads to non-stationary dynamic programming and an infinite system of Bellman equations, which yield persistently optimal policies. Our theory is enriched by examples.
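To make the departure from the classical model concrete, the following is a minimal illustrative sketch, not the paper's construction: value iteration on a hypothetical two-state MDP in which the usual linear discount term beta * v is replaced by a non-linear discount function delta applied to the expected continuation value. The particular choice delta(v) = beta * v / (1 + c * v), the transition kernel, and the rewards are all assumptions made for illustration; delta has slope at most beta < 1 on the non-negative reals, so the Bellman operator remains a contraction and the iteration converges.

```python
# Illustrative sketch (NOT the paper's construction): value iteration for a
# small MDP where the classical linear discount beta*v is replaced by a
# hypothetical non-linear discount function delta(v) = beta*v / (1 + c*v).

# transition[s][a] = list of (probability, next_state); reward[s][a] = immediate reward
transition = {
    0: {0: [(1.0, 0)], 1: [(0.3, 0), (0.7, 1)]},
    1: {0: [(0.6, 0), (0.4, 1)], 1: [(1.0, 1)]},
}
reward = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}

BETA, C = 0.9, 0.1  # hypothetical parameters of the discount function

def delta(v):
    """Non-linear discount applied to the expected continuation value.

    Its derivative BETA / (1 + C*v)**2 is at most BETA < 1 for v >= 0,
    so the induced Bellman operator is still a sup-norm contraction.
    """
    return BETA * v / (1.0 + C * v)

def value_iteration(tol=1e-10, max_iter=10_000):
    """Iterate V(s) = max_a [ r(s,a) + delta(E[V(s')]) ] to a fixed point."""
    V = {0: 0.0, 1: 0.0}
    for _ in range(max_iter):
        V_new = {}
        for s in V:
            V_new[s] = max(
                reward[s][a] + delta(sum(p * V[t] for p, t in transition[s][a]))
                for a in transition[s]
            )
        gap = max(abs(V_new[s] - V[s]) for s in V)
        V = V_new
        if gap < tol:
            break
    return V

V = value_iteration()
```

Note that with a non-linear delta the fixed-point value is no longer a discounted sum of rewards, which is one reason the paper's infinite-horizon analysis requires an infinite system of Bellman equations rather than a single one.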