We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(m lg n) time, where n is the number of states and m is the number of transitions. Our proof relies on the use of splay trees (designed by Sleator and Tarjan [J. ACM 32 (3) (1985) 652–686]) to sort transition weights.
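The abstract states only the result; as an informal illustration of the underlying idea (partition refinement: split a block whenever its states disagree on the aggregate transition weight they send into some other block), here is a minimal, unoptimized Python sketch. The function name coarsest_lumping, the dictionary representation of the transition matrix, and the naive restart-on-split loop are assumptions made for illustration, not the paper's construction; the O(m lg n) bound comes from sorting these weights with splay trees, which this sketch does not attempt.

```python
from collections import defaultdict

def coarsest_lumping(P, initial_partition):
    """Coarsest ordinary-lumpability refinement of initial_partition.

    P:                 dict mapping (source, target) -> transition weight
    initial_partition: list of sets of states (e.g. grouped by label)

    Repeatedly splits blocks whose states disagree on the total weight
    they send into some block, until no further split is possible.
    Naive sketch only; the paper's algorithm reaches O(m lg n) by
    sorting these weights with splay trees.
    """
    partition = [set(block) for block in initial_partition]
    changed = True
    while changed:
        changed = False
        for splitter in list(partition):
            # Aggregate weight from every state into the splitter block.
            weight = defaultdict(float)
            for (s, t), w in P.items():
                if t in splitter:
                    weight[s] += w
            refined = []
            for block in partition:
                # Group the block's states by their weight into the splitter.
                groups = defaultdict(set)
                for s in block:
                    groups[weight[s]].add(s)
                if len(groups) > 1:
                    changed = True
                refined.extend(groups.values())
            partition = refined
            if changed:
                break  # restart the scan with the refined partition
    return partition

# Toy 4-state chain: states 1 and 2 behave identically and get lumped.
P = {(0, 1): 0.5, (0, 2): 0.5, (1, 3): 1.0, (2, 3): 1.0, (3, 0): 1.0}
print(coarsest_lumping(P, [{0}, {1, 2, 3}]))
# -> [{0}, {1, 2}, {3}] (block order may vary)
```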
Numerical methods for solving Markov chains are in general inefficient if the state space of the cha...
Forming lumped states in a Markov chain is a very useful device leading to a coarser level o...
Solving Markov chains is, in general, difficult if the state space of the chain is very large (or in...
In 2003, Derisavi, Hermanns, and Sanders presented a complicated O(m log n) time algorithm for the M...
In this thesis, the theory of lumpability (strong lumpability and weak lumpability) of irreducible f...
This paper shows how lumping in Markov chains can be extended to Markov set-chains. The crit...
A class of Markov chains we call successively lumpable is specified for which it is shown that the s...
In this paper we reason about the notion of proportional lumpability, that generalizes the original ...
An irreducible and homogeneous Markov chain with finite state space is considered. Under a mild cond...
State space lumping is one of the classical means to fight the state space explosion problem in stat...
This paper extends Markov chain bootstrapping to the case of multivariate continuous-valued stochas...
Markov chains are an essential tool for sampling from large sets, and are ubiquitous across many sci...
Markov chains can accurately model the state-to-state dynamics of a wide range...
In this paper, we face a generalization of the problem of finding the distribution of how long it ta...
We face a generalization of the problem of finding the distribution of how long it takes to reach a ...