summary:We deal with a stochastic programming problem that can be inconsistent. To overcome the inconsistency we apply Tikhonov's regularization technique and, using recent results on the convergence rate of empirical measures in the Wasserstein metric, we treat two related problems: 1) the choice of regularization parameters that guarantees convergence of the minimization procedure; 2) estimation of the rate of convergence in probability. Considering both light- and heavy-tailed distributions and Lipschitz objective functions (which can be unbounded), we obtain power-type bounds for the convergence rate.
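To make the setup above concrete, the following is a minimal sketch (an illustration under stated assumptions, not the paper's algorithm) of Tikhonov-regularized sample-average minimization: the empirical objective (1/n) * sum_i f(x, xi_i) is augmented with a penalty alpha_n * ||x||^2, where the hypothetical schedule alpha_n = n^(-1/4) decays more slowly than a typical Wasserstein rate of empirical measures, so the regularized minimizers stabilize even when the unregularized problem has non-unique solutions. The loss f, the data model, and the schedule are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def regularized_empirical_objective(x, sample, alpha):
    # Sample average of the Lipschitz loss |xi - (x_1 + ... + x_d)|
    # plus the Tikhonov penalty alpha * ||x||^2.
    return np.mean(np.abs(sample - x.sum())) + alpha * np.dot(x, x)

def tikhonov_saa(sample, alpha, dim):
    # Minimize the regularized sample-average objective, starting from the origin.
    res = minimize(regularized_empirical_objective, np.zeros(dim),
                   args=(sample, alpha), method="Nelder-Mead")
    return res.x

dim = 2
for n in (10, 100, 1000, 10000):
    sample = rng.standard_normal(n)   # light-tailed data (illustrative assumption)
    alpha_n = n ** (-0.25)            # hypothetical schedule alpha_n = n^(-1/4)
    x_n = tikhonov_saa(sample, alpha_n, dim)
    print(f"n={n:6d}  alpha_n={alpha_n:.4f}  x_n={np.round(x_n, 4)}")
```

In this toy problem the unregularized empirical minimizers form a whole hyperplane {x : x_1 + x_2 = sample median}, i.e., the problem is ill-posed; the Tikhonov term selects the minimal-norm point on that hyperplane, which concentrates near 0 as n grows.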
There exists a vast literature on convergence rates results for Tikhonov regularized minimizers. We ...
We consider a function defined as the pointwise minimization of a doubly indexed random process. We ar...
The vast majority of convergence rates analysis for stochastic gradient methods in the literature fo...
A new concept of (normalized) convergence of random variables is introduced. This convergence is pr...
Fortet-Mourier (FM) metrics are important probability metrics, which have been widely ad...
We present a discrepancy-based parameter choice and stopping rule for iterative algorithms performin...
It is known that optimization problems depending on a probability measure correspond to many applica...
We consider the weak convergence of numerical methods for stochastic differential equations (SDEs). ...
summary:“Classical” optimization problems depending on a probability measure belong mostly to nonlin...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
We study convergence properties of empirical minimization of a stochastic strongly convex objective,...
The dissertation suggests a generalized version of Tikhonov regularization and analyzes its properti...