We study the problem of achieving a given value in Markov decision processes (MDPs) with several independent discounted reward objectives. We consider a generalised version of discounted reward objectives, in which the amount of discounting depends on the states visited and on the objective. This definition extends the usual definition of discounted reward, and allows us to capture systems in which the values of different commodities diminish at different and variable rates. We establish results for two prominent subclasses of the problem, namely state-discount models where the discount factors are only dependent on the state of the MDP (and independent of the objective), and reward-discount models where they are only dependent on the obje...
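The state-discount model described in this abstract can be illustrated with a minimal sketch of value iteration where the discount factor gamma(s) varies per state. All states, rewards, and discount values below are hypothetical and for illustration only; they are not taken from the paper.

```python
# Value iteration for an MDP with a state-dependent discount factor gamma(s),
# a sketch of the "state-discount" model. The MDP below is made up.

# Two states, two actions. P[s][a] maps next-state -> probability.
P = {
    0: {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}},
    1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.3, 1: 0.7}},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}  # immediate rewards R(s, a)
gamma = {0: 0.9, 1: 0.5}                         # state-dependent discounts

V = {s: 0.0 for s in P}
for _ in range(10_000):
    # Bellman update: V(s) = max_a [ R(s,a) + gamma(s) * E[V(s')] ]
    V_new = {
        s: max(
            R[s][a] + gamma[s] * sum(p * V[t] for t, p in P[s][a].items())
            for a in P[s]
        )
        for s in P
    }
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-12:
        V = V_new
        break
    V = V_new
```

Because every gamma(s) is strictly below 1, the update is still a contraction and the iteration converges to the unique fixed point, just as in the standard geometric-discount case.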
We consider multistage decision processes where the criterion function is an expectation of minimum func...
The running time of the classical algorithms of the Markov Decision Process (MDP) typically grows li...
This paper considers Markov decision processes (MDPs) with unbounded rates, as a function of state. ...
We consider a discrete time Markov Decision Process with infinite horizon. The criterion to be maxim...
In this paper we consider a constrained optimization of discrete time Markov Decision Processes (MDP...
We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We...
For semi-Markov decision processes with discounted rewards we derive the well known results regardin...
This paper studies the minimizing risk problems in Markov decision processes with countable ...
Canonical models of Markov decision processes (MDPs) usually consider geometric discounting based on...
The paper gives a survey on solution techniques for Markov decision processes with respect to the to...
A Markov decision process (MDP) relies on the notions of state, describing the current situation of ...
We consider Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) objectives...
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in...
Markov decision processes (MDPs) are controllable discrete event systems with stochastic transition...
In this paper we consider Markov decision processes (MDPs) that have the discounted...