We study countably infinite Markov decision processes (MDPs) with real-valued transition rewards. Every infinite run induces the following sequences of payoffs: 1. Point payoff (the sequence of directly seen transition rewards), 2. Mean payoff (the sequence of the sums of all rewards so far, divided by the number of steps), and 3. Total payoff (the sequence of the sums of all rewards so far). For each payoff type, the objective is to maximize the probability that the $\liminf$ is non-negative. We establish the complete picture of the strategy complexity of these objectives, i.e., how much memory is necessary and sufficient for $\varepsilon$-optimal (resp. optimal) strategies. Some cases can be won with memoryless deterministic strategies, while others require a step counter, a reward counter, or both.
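For concreteness, the three payoff sequences can be written out explicitly. The following is a minimal sketch of the definitions stated above; the symbols $r_i$, $P_n$, $M_n$ and $T_n$ are introduced here purely for illustration and are not notation taken from the paper itself. For an infinite run traversing transitions with rewards $r_1, r_2, r_3, \ldots$ and for every $n \geq 1$:
\begin{align*}
\text{Point payoff:} \quad P_n &= r_n,\\
\text{Mean payoff:}  \quad M_n &= \frac{1}{n}\sum_{i=1}^{n} r_i,\\
\text{Total payoff:} \quad T_n &= \sum_{i=1}^{n} r_i.
\end{align*}
For each payoff type $X \in \{P, M, T\}$, the objective is then to maximize the probability of the event $\liminf_{n\to\infty} X_n \geq 0$.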