Let (X, P_x) be a continuous-time Markov chain with finite or countable state space S, and let T be its first passage time in a subset D of S. It is well known that if μ is a quasi-stationary distribution relative to T, then this time is exponentially distributed under P_μ. However, quasi-stationarity is not a necessary condition. In this paper, we determine more general conditions on an initial distribution μ for T to be exponentially distributed under P_μ. We show in addition how quasi-stationary distributions can be expressed in terms of any initial law which makes the distribution of T exponential. We also study two examples in branching processes where exponentiality does imply quasi-stationarity.
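The "well known" implication cited above follows from a short memorylessness argument. The LaTeX sketch below records it under the standard definition of a quasi-stationary distribution relative to T, namely P_μ(X_t ∈ · | T > t) = μ for all t ≥ 0; the rate symbol θ is introduced here for illustration and is not notation from the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch: if $\mu$ is quasi-stationary relative to $T$, then $T$ is
% exponentially distributed under $P_\mu$.
% Hypothesis used below: $P_\mu(X_t \in \cdot \mid T > t) = \mu$ for all $t \ge 0$.
\begin{align*}
P_\mu(T > t + s)
  &= P_\mu(T > t)\, P_\mu(T > t + s \mid T > t) \\
  &= P_\mu(T > t)\, P_\mu(T > s),
\end{align*}
by the Markov property together with quasi-stationarity of $\mu$
(conditionally on $\{T > t\}$, the chain restarts from the law $\mu$).
Thus $g(t) := P_\mu(T > t)$ satisfies $g(t+s) = g(t)\,g(s)$; since $g$ is
non-increasing and right-continuous, $g(t) = e^{-\theta t}$ for some
$\theta \ge 0$, i.e.\ $T$ is exponentially distributed under $P_\mu$
(with the degenerate case $T = \infty$ a.s.\ when $\theta = 0$).
\end{document}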