The following modification of a general state space discrete-time Markov chain is considered: certain transitions are supposed "forbidden" and the chain evolves until there is such a transition. At this instant the value of the chain is "replaced" according to a given rule, and, starting from the new value, the chain evolves normally until there is a forbidden transition again; the cycle is then repeated. The relationship of this modified process to the original one is studied in general terms, with particular emphasis being given to invariant measures. Examples are given which illustrate the results obtained.

Keywords: Markov chain, replacement, invariant measure, general state space, forbidden transition
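The replacement mechanism described in the abstract can be sketched on a small finite state space (the paper itself treats general state spaces). The state space, transition matrix, forbidden-transition set, and replacement rule below are all illustrative assumptions, not taken from the source; the occupation frequencies of a long run approximate the invariant measure of the modified chain.

```python
import random

# Illustrative 3-state sketch of the "replacement" chain: the chain
# evolves by P until it attempts a forbidden transition, at which
# instant the landing value is replaced by a given rule. All
# parameters below are assumptions for demonstration only.

STATES = [0, 1, 2]
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
FORBIDDEN = {(0, 2), (2, 0)}   # transitions that trigger replacement
REPLACE = {0: 1, 1: 2, 2: 1}   # replacement rule applied to the landing state

def step(x, rng):
    """One step of the modified chain: draw a transition from P;
    if that transition is forbidden, replace the landing state."""
    y = rng.choices(STATES, weights=P[x])[0]
    if (x, y) in FORBIDDEN:
        y = REPLACE[y]
    return y

def empirical_measure(n_steps=200_000, seed=0):
    """Occupation frequencies over a long run approximate the
    invariant measure of the modified chain."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    x = 0
    for _ in range(n_steps):
        x = step(x, rng)
        counts[x] += 1
    return [c / n_steps for c in counts]

print(empirical_measure())
```

Comparing this empirical measure with the invariant measure of the unmodified chain (simulated with `FORBIDDEN` empty) illustrates the kind of relationship between the two processes that the paper studies analytically.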
We have studied Markov processes on denumerable state space and continuous time. We found that all t...
We consider finite-state Markov chains driven by stationary ergodic invertible processes representin...
We treat a special type of Markov chain with a finite state space. This type of Markov chain often a...
Time reversibility plays an important role in the analysis of continuous and discrete time Markov ch...
We consider a periodic absorbing Markov chain for which each time absorption occurs there is a reset...
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obta...
An increasing sequence of random times {Tn, n ⩾ 0} is called a Markov time change if {X(Tn)}...
We propose an alternate parameterization of stationary regular finite-state Markov chains, and a dec...
A discrete-time Markov chain on the interval [0, 1] with two possible transitions (left or right) at...
This book concerns discrete-time homogeneous Markov chains that admit an invar...
φ^T = (φ_1, φ_2, ..., φ_m) for an m-state homogeneous irreducible Markov chain with transition probabilit...
The theory of time-reversibility has been widely used to derive the expressions of the invariant mea...
Markov chains are useful to model various complex systems. In numerous situations, the underlying Ma...
A Markov chain (with a discrete state space and a continuous parameter) is perturbed by forcing a ch...