We explore formal approximation techniques for Markov chains based on state-space reduction that aim at improving the scalability of the analysis, while providing formal bounds on the approximation error. We first present a comprehensive survey of existing state-reduction techniques based on clustering or truncation. Then, we extend existing frameworks for aggregation-based analysis of Markov chains by allowing them to handle chains with an arbitrary structure of the underlying state space – including continuous-time models – and improve upon existing bounds on the approximation error. Finally, we introduce a new hybrid scheme that utilises both aggregation and truncation of the state space and provides the best availab...
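The truncation side of such schemes can be illustrated with a minimal, self-contained sketch. The Python example below is not the thesis's actual construction or one of its case studies; it is a generic finite-state-projection-style truncation of a hypothetical M/M/1 queue, where the rates, the truncation level N, and the time horizon are illustrative assumptions. Jumps that would leave the truncated set are dropped but their rates remain on the diagonal, so the computed vector under-approximates the true transient probabilities and the missing mass gives a formal bound on the approximation error.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical M/M/1 queue: arrival/service rates, truncation level, time horizon.
lam, mu, N, t = 0.8, 1.0, 50, 10.0

# Sub-generator on states {0,...,N}: jumps out of the truncated set are dropped,
# but the diagonal keeps the full outflow rate (at i = N this includes the
# dropped jump to N+1), so rows may sum to less than zero.
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -(lam + (mu if i > 0 else 0.0))

p0 = np.zeros(N + 1)
p0[0] = 1.0                      # start with an empty queue
p_t = p0 @ expm(Q * t)           # transient sub-probability vector at time t

leaked = 1.0 - p_t.sum()         # probability mass that escaped the truncated set
print(f"truncated transient distribution sums to {p_t.sum():.6f}")
print(f"approximation error is at most {leaked:.2e}")
```

Increasing N shrinks the leaked mass, so the reported bound can be tightened until it meets a prescribed tolerance.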
Labelled Markov processes are probabilistic versions of labelled transition systems. In gene...
We consider a simple and widely used method for evaluating quasi-stationary distributions of continu...
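The abstract above is cut off, so the paper's own method is not reproduced here. As a generic illustration only: for a small absorbing CTMC with a hypothetical transient sub-generator Q_T, a quasi-stationary distribution can be obtained as the left eigenvector of Q_T associated with the eigenvalue of largest real part, normalised to a probability vector.

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical absorbing CTMC: state 0 absorbs; Q_T is the generator restricted
# to the three transient states (row sums are negative where absorption can occur).
Q_T = np.array([
    [-2.0,  1.0,  0.5],
    [ 1.0, -3.0,  1.5],
    [ 0.0,  2.0, -2.5],
])

# Left eigenvector for the eigenvalue with the largest real part (Perron root),
# normalised to sum to 1, gives the quasi-stationary distribution.
vals, left_vecs = eig(Q_T, left=True, right=False)
k = np.argmax(vals.real)
alpha = np.abs(left_vecs[:, k].real)
alpha /= alpha.sum()

print("decay rate:", -vals[k].real)
print("quasi-stationary distribution:", alpha)
```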
Computing the stationary distributions of a continuous-time Markov chain (CTMC) involves s...
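For a finite, irreducible CTMC with generator Q, the stationary distribution solves the global balance equations pi Q = 0 together with the normalisation sum_i pi_i = 1. A minimal NumPy sketch, using a hypothetical 3-state generator rather than any model from the paper, replaces one redundant balance equation with the normalisation constraint:

```python
import numpy as np

# Hypothetical 3-state CTMC generator (each row sums to zero).
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -4.0,  3.0],
    [ 2.0,  2.0, -4.0],
])

n = Q.shape[0]
# pi solves pi Q = 0 with sum(pi) = 1; drop one (redundant) balance equation
# and substitute the normalisation constraint in its place.
A = np.vstack([Q.T[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("stationary distribution:", pi)
print("residual pi @ Q:", pi @ Q)   # should be numerically zero
```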
In this work we discuss approximative techniques for the analysis of Markov chains, namely, state sp...
Numerical methods for solving Markov chains are in general inefficient if the state space of the cha...
The goal of this work is to formally abstract a Markov process evolving in discrete time over a gene...
This paper proposes a measure-theoretic reconstruction of the approximation schemes developed for La...
In this thesis, the theory of lumpability (strong lumpability and weak lumpability) of irreducible f...
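As a concrete illustration of strong lumpability in the Kemeny–Snell sense: a partition is strongly lumpable for a DTMC with transition matrix P exactly when, within every block, all states have the same total probability of jumping into each block; those common block-to-block probabilities then form the lumped chain. The sketch below uses a hypothetical 4-state chain, not an example from the thesis, to check the condition and build the lumped matrix.

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-9):
    """Check Kemeny-Snell strong lumpability of P w.r.t. a partition
    given as a list of lists of state indices."""
    for block_from in partition:
        for block_to in partition:
            # Total probability of jumping from each state in block_from into block_to.
            mass = P[np.ix_(block_from, block_to)].sum(axis=1)
            if np.ptp(mass) > tol:   # all states in a block must agree
                return False
    return True

def lumped_matrix(P, partition):
    """Transition matrix of the lumped chain (valid only if lumpable)."""
    k = len(partition)
    P_hat = np.zeros((k, k))
    for a, block_from in enumerate(partition):
        for b, block_to in enumerate(partition):
            P_hat[a, b] = P[np.ix_(block_from, block_to)].sum(axis=1)[0]
    return P_hat

# Hypothetical 4-state DTMC in which {0,1} and {2,3} lump together.
P = np.array([
    [0.1, 0.3, 0.4, 0.2],
    [0.2, 0.2, 0.1, 0.5],
    [0.3, 0.3, 0.2, 0.2],
    [0.5, 0.1, 0.3, 0.1],
])
partition = [[0, 1], [2, 3]]
print(is_strongly_lumpable(P, partition))   # True for this example
print(lumped_matrix(P, partition))
```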
Solving Markov chains is, in general, difficult if the state space of the chain is very large (or in...
Two main approximation methods for steady-state analysis of Markov chains are introduced: ...
This work focuses on the computation of the steady state distribution of a Markov chain, making use ...
Markov chains are frequently used to model complex stochastic systems. Unfortunately the state space...
Markov chain modeling often suffers from the curse of dimensionality problems ...
If the state space of a homogeneous continuous-time Markov chain is too large, making inferences bec...
Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 2005, 190 pp. Markovian modeling of systems...