We study the computational complexity of central analysis problems for One-Counter Markov Decision Processes (OC-MDPs), a class of finitely-presented, countable-state MDPs. OC-MDPs are equivalent to a controlled extension of (discrete-time) Quasi-Birth-Death processes (QBDs), a stochastic model studied heavily in queueing theory and applied probability. They can thus be viewed as a natural "adversarial" version of a classic stochastic model. Alternatively, they can also be viewed as a natural probabilistic/controlled extension of classic one-counter automata. OC-MDPs also subsume (as a very restricted special case) a recently studied MDP model called "solvency games" that model a risk-averse gambling scenario. Basic computational questi...
In this paper we present a novel abstraction technique for Markov decision processes (MDPs), which a...
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI r...
We study countably infinite Markov decision processes (MDPs) with real-valued transition rewards. Ev...
We study the computational complexity of basic decision problems for one-counter simple stochastic g...
One-counter MDPs (OC-MDPs) and one-counter simple stochastic games (OC-SSGs) are 1-player, and 2-pla...
We consider the problem of computing the value and an optimal strategy for minimizing the...
Markov decision processes (MDP) are finite-state systems with both strategic and probabili...
We consider decentralized control of Markov decision processes and give complexity bounds on the wor...
We consider a class of infinite-state Markov decision processes generated by stateless pushdown auto...
Markov decision processes (MDPs) are models of dynamic decision making under uncertainty. These mode...
Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and rand...
The value 1 problem is a natural decision problem in algorithmic game theory. For partially observab...