Suppose we observe a geometrically ergodic Markov chain with a parametric model for the marginal, but no (further) information about the transition distribution. Then the empirical estimator for a linear functional of the joint law of two successive observations is no longer efficient. We construct an improved estimator and show that it is efficient. The construction is similar to a recent one for bivariate models with parametric marginals. The result applies to discretely observed parametric continuous-time processes.

AMS 2000 subject classifications. Primary 62G20, 62G30, 62M05.

Key words and Phrases. Least squares estimator, series estimator, orthonormal basis, efficient influence function, reversible Markov chain, discretely observed d...
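As a point of reference, the sketch below shows the baseline empirical (plug-in) estimator of a linear functional E[h(X_{i-1}, X_i)] of the joint law of two successive observations, which the abstract says is no longer efficient once the marginal is modelled parametrically. This is only an illustration of the object being estimated, not the paper's improved estimator; the AR(1) example, the function names, and the choice of h are illustrative assumptions.

```python
import numpy as np

def empirical_pair_functional(x, h):
    """Empirical estimator of E[h(X_{i-1}, X_i)] from one trajectory
    x[0], ..., x[n], i.e. the average of h over successive pairs."""
    x = np.asarray(x)
    return np.mean(h(x[:-1], x[1:]))

# Illustrative example: an AR(1) chain (geometrically ergodic for |rho| < 1)
# and the functional E[X_{i-1} X_i], the lag-one autocovariance about zero.
rng = np.random.default_rng(0)
rho, n = 0.5, 10_000
x = np.empty(n + 1)
x[0] = rng.normal()
for i in range(1, n + 1):
    x[i] = rho * x[i - 1] + rng.normal()

print(empirical_pair_functional(x, lambda u, v: u * v))
```

An efficient estimator in the setting of the paper would additionally exploit the parametric model for the marginal distribution; that construction is not reproduced here.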