The ideal estimation method needs to fulfill three requirements: (i) efficient computation, (ii) statistical efficiency, and (iii) numerical stability. The classical stochastic approximation of Robbins and Monro (1951) is an iterative estimation method in which the current iterate (parameter estimate) is updated according to some discrepancy between what is observed and what is expected assuming the current iterate has the true parameter value. Classical stochastic approximation undoubtedly meets the computation requirement, which explains its widespread popularity, for example, in modern applications of machine learning with large data sets, but it cannot easily be combined with statistical efficiency and numerical stability. Surprisingly, the stability issue can be impr...
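The observed-minus-expected update described above can be illustrated with a minimal Robbins–Monro recursion. This is a generic sketch, not any particular method from the works listed below: here the target is a simple mean, the discrepancy is x_n − θ_n, and the learning rate a_n = 1/n is one standard choice of decreasing step size.

```python
import random

def robbins_monro_mean(samples, theta0=0.0):
    """Estimate E[X] with the Robbins-Monro recursion
        theta_n = theta_{n-1} + a_n * (x_n - theta_{n-1}),
    where (x_n - theta_{n-1}) is the observed-minus-expected
    discrepancy and a_n = 1/n is a decreasing learning rate.
    With this a_n the recursion reduces to the running average."""
    theta = theta0
    for n, x in enumerate(samples, start=1):
        a_n = 1.0 / n
        theta += a_n * (x - theta)
    return theta

random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(10000)]
est = robbins_monro_mean(data)  # close to the true mean 5.0
```

Note that the estimate's quality hinges on the learning-rate schedule: steps that shrink too fast stall the iterate (a stability concern), while steps that shrink too slowly inflate its variance (a statistical-efficiency concern), which is exactly the tension the paragraph above describes.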
Abstract (abridged): Many of the present problems in automatic control, economic systems, and living o...
The problem of estimating large scale implicit (non-recursive) models by two-stage methods is consi...
Within a statistical learning setting, we propose and study an iterative regularization algorithm fo...
Efficient optimization procedures, such as stochastic gradient descent, have been gaining popularity...
Stochastic gradient descent procedures have gained popularity for parameter estimation from large da...
Stochastic approximation algorithms are iterative procedures which are used to approximate a target ...
Iterative Monte Carlo methods, such as MCMC, stochastic approximation, and EM, have proven to be ver...
The oldest stochastic approximation method is the Robbins–Monro process. This estimates an unknown s...
This is the final version of the article. It first appeared from Neural Information Processing Syste...
Implicit sampling is a weighted sampling method that is used in data assimilation to sequentially up...
This thesis presents some broadly applicable algorithms for computing maximum likelihood estimates (...
Historically, the choice of method for a given statistical problem has been primarily driven by two ...
Abstract. Stochastic-approximation gradient methods are attractive for large-scale convex optimizati...
Many modern estimation methods in econometrics approximate an objective function, through simulation...
We consider the Kiefer-Wolfowitz (KW) stochastic approximation algorithm and derive general upper b...