Stochastic approximation (SA) methods, first proposed by Robbins and Monro in 1951 for root-finding problems, have been widely used in the literature to solve problems arising from stochastic convex optimization, stochastic Nash games and, more recently, stochastic variational inequalities. Several challenges arise in the development of SA schemes. First, little guidance is provided on the choice of the steplength sequence. Second, most variants of these schemes in optimization require differentiability of the objective function and Lipschitz continuity of its gradient. Finally, strong convexity of the objective function is often required, which is a strong assumption. Motivated by these challenges, this thesis focuses on s...
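As a minimal sketch of the classic Robbins-Monro scheme referenced above, the following illustrates the SA iteration with a diminishing steplength. The objective, the steplength rule gamma_k = 1/k, and all names are illustrative assumptions, not the specific rules studied in the works listed here.

```python
import numpy as np

# Robbins-Monro stochastic approximation sketch: minimize E[0.5*(x - xi)^2]
# for xi ~ N(2, 1) using unbiased stochastic gradient samples.
# (Illustrative problem; not taken from the abstracts above.)

rng = np.random.default_rng(0)

def stochastic_gradient(x):
    """Unbiased sample of the gradient of E[0.5*(x - xi)^2], i.e. x - xi."""
    xi = rng.normal(loc=2.0, scale=1.0)
    return x - xi

x = 0.0
for k in range(1, 20001):
    gamma = 1.0 / k  # classic steplength: sum diverges, sum of squares converges
    x -= gamma * stochastic_gradient(x)

print(x)  # approaches the minimizer E[xi] = 2
```

With gamma_k = 1/k this recursion reduces to the running sample mean of the xi draws, which is one way to see why the Robbins-Monro steplength conditions yield convergence; the abstracts above concern exactly how sensitive such schemes are to this steplength choice.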
A stochastic subgradient method for solving convex stochastic programming problems is consid...
This dissertation investigates the use of sampling methods for solving stochastic optimization probl...
In this paper, we study a stochastic strongly convex optimization problem and propose three classes ...
Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochasti...
This paper considers stochastic variational inequality (SVI) problems where the mapping is merely mo...
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping...
Traditionally, much of the research in the field of optimization algorithms has assumed that problem...
In this paper we propose several adaptive gradient methods for stochastic optimization. Our methods ...
Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) pr...
In this thesis we study iterative algorithms in order to solve constrained and unconstrained convex ...
We consider a class of stochastic smooth convex optimization problems under rather general assumptio...