Recent years have witnessed huge advances in machine learning (ML) and its applications, especially in image, speech, and language processing. Optimization is a key ingredient of ML, in both the training and hyperparameter-tuning steps, and it also influences the test phase. In this thesis, we improve the optimization of all three of these problems. The first part of the thesis considers the training problem. We present the first linearly convergent stochastic gradient method for training conditional random fields (CRFs), based on the stochastic average gradient (SAG) method. Our method addresses the memory requirements of SAG and proposes an improved non-uniform sampling (NUS) technique. The second part of the thesis deals with memory-f...
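Since the abstract's central technical claim is a linearly convergent SAG method with non-uniform sampling, a minimal sketch may help fix the idea. The example below is illustrative only, not the thesis's code: it applies SAG with Lipschitz-proportional sampling to L2-regularized logistic regression instead of a CRF (a CRF would swap in its own per-example gradient), and it simplifies the step-size and NUS corrections the thesis refines. All names (sag_nus, etc.) are hypothetical.

import numpy as np

def sag_nus(X, y, lam=0.1, n_epochs=20, seed=0):
    """Sketch of SAG with Lipschitz-based non-uniform sampling (NUS)
    on L2-regularized logistic regression with labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_memory = np.zeros((n, d))   # last gradient seen for each example
    grad_sum = np.zeros(d)           # running sum of the stored gradients

    # Per-example Lipschitz constants for logistic loss: ||x_i||^2 / 4 + lam.
    L = 0.25 * np.sum(X**2, axis=1) + lam
    p = L / L.sum()                  # sample "harder" examples more often
    step = 1.0 / L.max()             # conservative constant step size

    for _ in range(n_epochs * n):
        i = rng.choice(n, p=p)
        # Fresh gradient of example i (logistic loss plus L2 term).
        sigma = 1.0 / (1.0 + np.exp(-y[i] * X[i].dot(w)))
        g_new = -(1.0 - sigma) * y[i] * X[i] + lam * w
        # Replace the stored gradient and update the running sum in O(d),
        # so each iteration touches one example but uses all n gradients.
        grad_sum += g_new - grad_memory[i]
        grad_memory[i] = g_new
        w -= step * grad_sum / n
    return w

The O(n x d) gradient memory shown here is exactly the cost the abstract says the thesis addresses; for CRFs, structure in the per-example gradients can be exploited to reduce it.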
This paper explores using a stochastic average gradient (SAG) algorithm for training conditional ra...
Recently, Stochastic Gradient Descent (SGD) and its variants have become the dominant methods in the...
Stochastic and data-distributed optimization algorithms have received lots of attention from the mac...
This work considers optimization methods for large-scale machine learning (ML). Optimization in ML ...
Optimization has been the workhorse of solving machine learning problems. However, the efficiency of...
University of Minnesota Ph.D. dissertation. April 2020. Major: Computer Science. Advisor: Arindam Ba...
The interplay between optimization and machine learning is one of the most important developments in...
Classical optimization techniques have found widespread use in machine learning. Convex optimization...