A Markov decision process (MDP) is a decision-making framework in which a decision maker seeks to maximize the expected discounted value of a stream of rewards received at future stages, at states visited according to a controlled Markov chain. Many algorithms, including linear programming methods, are available in the literature to compute an optimal policy when the rewards and transition probabilities are deterministic. In this paper, we consider an MDP problem where the transition probabilities are known and the reward vector is a random vector whose distribution is partially known. We formulate the MDP problem using a distributionally robust chance-constrained optimization framework under various types of moment-based...
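The classical setting described above (deterministic rewards and transition probabilities, maximizing expected discounted reward) can be illustrated with a minimal value-iteration sketch. The two-state, two-action MDP below is invented for illustration only; its rewards, transition matrices, and discount factor are not taken from any of the papers listed here.

```python
import numpy as np

# Toy MDP (hypothetical numbers, for illustration only):
# P[a][s, s'] = probability of moving from state s to s' under action a
# R[a][s]    = immediate reward for taking action a in state s
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.4, 0.6]])]   # action 1
R = [np.array([1.0, 0.0]),                 # rewards under action 0
     np.array([0.5, 2.0])]                 # rewards under action 1
gamma = 0.9                                # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update:
    # Q(a, s) = R(a, s) + gamma * sum_s' P(s'|s, a) * V(s')
    Q = np.array([R[a] + gamma * P[a] @ V for a in range(2)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy w.r.t. the converged values
```

The paper's setting departs from this sketch precisely where the sketch is simplest: here `R` is a fixed array, whereas the distributionally robust chance-constrained formulation treats the reward vector as random with a partially known distribution.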
Stochastic programming can effectively describe many decision making problems in uncertain environme...
We study stochastic optimization problems with chance and risk constraints, where in the latter, ris...
In this paper, we seek robust policies for uncertain Markov Decision Processes (MDPs). Most robust o...
We consider Markov decision processes where the values of the parameters are uncertain. This uncerta...
This paper considers the distributionally robust chance constrained Markov decision process with ran...
Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environ...
Chance-constrained optimization is a powerful mathematical framework that addresses decision-making ...
This paper investigates the computational aspects of distributionally robust chance constr...
Markov decision processes (MDPs) are a standard modeling tool for sequential decision making in a dyna...
Markov Decision Problems (MDPs) offer an effective mechanism for planning under uncertainty. However,...
Distributionally robust optimization (DRO) is a modeling framework in decision making under uncertai...
We introduce a new class of distributionally robust optimization problems under decision-dependent a...
A wide variety of decision problems in engineering, science and economics involve uncertain paramete...