Stochastic approximation is one of the most effective approaches for dealing with large-scale machine learning problems, and recent research has focused on reducing the variance caused by noisy approximations of the gradients. In this paper, we propose novel variants of SAAG-I and II (Stochastic Average Adjusted Gradient) (Chauhan et al. 2017), called SAAG-III and IV, respectively. Unlike SAAG-I, in SAAG-III the starting point is set to the average of the previous epoch; and unlike SAAG-II, in SAAG-IV the snap point and the starting point are set to the average and the last iterate of the previous epoch, respectively. To determine the step size, we use Stochastic Backtracking-Armijo line Search (SBAS), which performs the line search only on a selected mini-batch...
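The mini-batch line search described above can be sketched as a standard backtracking-Armijo loop evaluated only on the current mini-batch. This is a minimal illustrative sketch, not the paper's implementation; the function names (`loss_fn`, `grad_fn`) and the default constants (`eta0`, `beta`, `c`) are assumptions.

```python
import numpy as np

def sbas_step(w, grad_fn, loss_fn, batch, eta0=1.0, beta=0.5, c=1e-4, max_iter=20):
    """One gradient step with a backtracking-Armijo line search on a mini-batch.

    grad_fn(w, batch) and loss_fn(w, batch) evaluate the mini-batch gradient
    and loss; eta0 is the initial step size, beta the shrink factor, and c
    the Armijo sufficient-decrease constant (all illustrative defaults).
    """
    g = grad_fn(w, batch)
    f0 = loss_fn(w, batch)
    eta = eta0
    for _ in range(max_iter):
        # Armijo sufficient-decrease condition, checked only on the mini-batch
        if loss_fn(w - eta * g, batch) <= f0 - c * eta * np.dot(g, g):
            break
        eta *= beta  # step too long: shrink and retry
    return w - eta * g, eta

# Usage on a toy least-squares problem
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
loss = lambda w, b: 0.5 * np.mean((X[b] @ w - y[b]) ** 2)
grad = lambda w, b: X[b].T @ (X[b] @ w - y[b]) / len(b)
w_new, eta = sbas_step(np.zeros(2), grad, loss, batch=[0, 2])
```

Because the Armijo condition is tested on the same mini-batch that produced the gradient, each check costs only one extra mini-batch loss evaluation rather than a full pass over the data.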
Statistical inference, such as hypothesis testing and calculating a confidence interval, is an impor...
We introduce a novel and efficient algorithm called the stochastic approximate gradient descent (SAG...
The field of statistical machine learning has seen a rapid progress in complex hierarchical Bayesian...
This work considers optimization methods for large-scale machine learning (ML). Optimization in ML ...
With the purpose of examining biased updates in variance-reduced stochastic gradient methods, we int...
Stochastic gradient descent is popular for large scale optimization but has slow convergence asympto...
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its ...
Recent years have witnessed huge advances in machine learning (ML) and its applications, especially ...
Current machine learning practice requires solving huge-scale empirical risk minimization problems q...
In this paper, we propose a novel reinforcement-learning algorithm consisting ...
Stochastic approximation (SA) is a classical algorithm that has had since the ...
University of Minnesota Ph.D. dissertation. April 2020. Major: Computer Science. Advisor: Arindam Ba...