We analyze the moments of the accumulated reward over the interval (0, t) in a continuous-time Markov chain. We develop a numerical procedure to efficiently compute the normalized moments using the uniformization technique. Our algorithm involves auxiliary quantities whose convergence is analyzed, and for which we provide a probabilistic interpretation.
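The abstract above builds on the uniformization technique. As background, here is a minimal sketch of plain uniformization for the transient state distribution of a CTMC, not the authors' moment algorithm: the generator Q is turned into a DTMC transition matrix P = I + Q/Λ with Λ ≥ max_i |q_ii|, and π(t) is a Poisson-weighted sum of DTMC powers. All names and the truncation tolerance are illustrative.

```python
import numpy as np

def transient_distribution(Q, pi0, t, tol=1e-12):
    """Transient distribution pi(t) of a CTMC via uniformization (sketch).

    Q   : generator matrix (rows sum to 0)
    pi0 : initial distribution (row vector)
    t   : time horizon
    """
    Lam = max(-np.diag(Q))            # uniformization rate Lambda
    P = np.eye(len(Q)) + Q / Lam      # embedded DTMC transition matrix
    weight = np.exp(-Lam * t)         # Poisson(Lam*t) probability, k = 0
    term = pi0.copy()                 # pi0 @ P^k, starting at k = 0
    result = weight * term
    acc = weight                      # accumulated Poisson mass
    k = 0
    # add Poisson-weighted DTMC powers until the remaining mass is below tol
    while 1.0 - acc > tol:
        k += 1
        term = term @ P
        weight *= Lam * t / k
        result += weight * term
        acc += weight
    return result
```

For large Λt the leading factor e^{-Λt} underflows; practical implementations (e.g. Fox–Glynn) compute the Poisson weights in a numerically stable way instead.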
This thesis attempts to bring together two different approaches to the modeling of event driven syst...
One of the most widely used techniques to obtain transient measures is the uniformization method. How...
In this paper, a full treatment of homogeneous discrete time Markov reward processes is presented. T...
Power moments for accumulated rewards defined on Markov and semi-Markov chains are studied. A model ...
The majority of computational methods applied for the analysis of homogeneous Markov reward ...
We consider Markov reward processes with finite state space both in discrete- and continuous-time se...
A generally applicable discretization method for computing the transient distribution of the cumulat...
We consider a discrete time Markov reward process with finite state and action spaces and random ret...
We consider the variance of the reward until absorption in a Markov chain. This variance is usually ...
This paper provides a simulated moments estimator (SME) of the parameters of dynamic models in which...
Let (X_i)_{i=0}^∞ be a V-uniformly ergodic Markov chain on a general state space, and let π be its statio...
The computation of transient probabilities for continuous-time Markov chains often employs ...
Analysis of Markov Reward Models (MRM) with preemptive resume (prs) policy results in a doub...