This paper provides a unified framework for studying monotone optimal control of a class of Markov decision processes through D-multimodularity. We show that every system in this class can be classified as either substitution-type or complement-type according to its set of possible transitions, a classification that integrates a variety of models from the literature. We develop a generic proof of the structural properties of both types of system and, in particular, show that D-multimodularity is a sufficient condition for monotone optimal control across both types. With this unified theory, there is no need to pursue each problem ad hoc and the structural properties of...
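For context (the display below is our addition, not taken from the abstract above): the classical notion underlying D-multimodularity is Hajek's multimodularity. A function f on Z^m is multimodular with respect to the usual base of directions if

\[
f(x + v) + f(x + w) \;\ge\; f(x) + f(x + v + w)
\qquad \text{for all } x \in \mathbb{Z}^m \text{ and all distinct } v, w \in \mathcal{F},
\]

where \(\mathcal{F} = \{-e_1,\; e_1 - e_2,\; \dots,\; e_{m-1} - e_m,\; e_m\}\) and \(e_i\) denotes the i-th unit vector. As we read it, the D-multimodularity used in the paper is a variant in which the direction set is specified by a matrix D; the precise definition should be taken from the paper itself.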
We consider a class of Markov Decision Processes frequently employed to model ...
This paper studies discrete-time multiobjective Markov control processes (MCPs) on Borel spaces and ...
We introduce a class of MDPs that greatly simplifies Reinforcement Learning. They have discrete state...
Firstly, this paper considers a certain class of possibly unbounded optimization...
We introduce the concept of multimodularity into the class of stochastic dynamic programs in which s...
This paper considers Markov decision processes (MDPs) that have the discounted...
Structural properties of stochastic dynamic programs are essential to understanding the nature of th...
This dissertation studies monotone optimal control for a class of discrete-event stochastic systems ...
Structural properties of stochastic dynamic programs are essential to understanding the nature of th...
This paper introduces a formulation of the mixed risk-neutral/minimax control problem for Markov Dec...
A stochastic matrix is "monotone" [4] if its row-vectors are stochastically increasing. Closure prop...
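For concreteness (our addition, stating the standard definition rather than quoting [4]): the rows of a stochastic matrix \(P = (p_{ij})\) on a totally ordered state space are stochastically increasing when

\[
\sum_{k \ge j} p_{ik} \;\le\; \sum_{k \ge j} p_{i+1,\,k}
\qquad \text{for every row } i \text{ and every state } j,
\]

i.e., each row dominates the previous one in the usual stochastic order.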
The purpose of the paper is to present a complete theory of optimal control of piecewise linear and ...
Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the op...
This note considers controlled Markov chains with finite state and action spaces and multiple costs. The...
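A common thread in the abstracts above is monotone (e.g., threshold-type) optimal control. As a reader's note, not drawn from any single entry: the standard comparative-statics step behind such results is that if the state-action cost \(Q(x,a)\) has decreasing differences (is submodular) in \((x,a)\) and minimizers exist (e.g., finite action sets), then the smallest minimizer is nondecreasing in the state:

\[
Q(x', a') - Q(x', a) \;\le\; Q(x, a') - Q(x, a)
\quad \text{for all } x' \ge x,\ a' \ge a
\;\Longrightarrow\;
a^*(x) := \min \operatorname*{arg\,min}_{a} Q(x, a)\ \text{is nondecreasing in } x.
\]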