We consider semi-Markov decision processes (SMDPs) with finite state and action spaces. Two criteria are studied: the expected average reward per unit time subject to a sample-path constraint on the average cost per unit time, and the expected time-average variability. Under a certain condition, for communicating SMDPs, we construct (randomized) stationary policies that are ε-optimal for each criterion; the policy is optimal for the first criterion under the unichain assumption, and optimal and pure for a specific variability function in the second criterion. For general multichain SMDPs, similar results are obtained via a state-space decomposition approach. © 2007 Cambridge University Press
We consider multistage decision processes where the criterion function is an expectation of minimum func...
A Markov decision process (MDP) relies on the notions of state, describing the current situation of ...
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with unce...
Time-average Markov decision problems are considered for the finite state and action spaces. Several...
In this note, we consider semi-Markov decision processes with finite state and general multichain st...
We consider a semi-Markov decision process with arbitrary action space; the state space is the nonne...
Semi-Markov decision processes can be considered as an extension of discrete- and continuous-time M...
We shall be concerned with the optimization problem of semi-Markov decision processes with countable...
This paper deals with a first passage mean-variance problem for semi-Markov decision process...
This paper presents a new model: the mixed Markov decision process (MDP) in a semi-Markov en...
We consider a Markov decision process with an uncountable state space for which the vector p...