A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Logistic Regression, Conditional Random Fields (CRFs), and Lasso, amongst others. This paper describes the theory and implementation of a highly scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data-locality, and can deal with regularizers such as ℓ1 and ℓ2 penalties. At present, our solver implements 20 different estimation problems, can be easily extended, scales to millions of observations, and is u...
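For orientation, a minimal sketch of the objective that all of the abstracts below refer to; the notation (loss l, regularizer Omega, trade-off weight lambda) is the conventional one and is not taken from any single paper excerpted here:

\[
\min_{w}\; J(w) \;=\; \lambda\,\Omega(w) \;+\; \frac{1}{n}\sum_{i=1}^{n} l(x_i, y_i; w),
\qquad \Omega(w) = \|w\|_1 \ \text{or}\ \tfrac{1}{2}\|w\|_2^2 .
\]

Choosing the hinge loss with the ℓ2 penalty recovers the linear SVM, the logistic loss gives regularized logistic regression, and the squared loss with the ℓ1 penalty gives the Lasso.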
Modern machine learning systems pose several new statistical, scalability, privacy and ethical chall...
The topic of this dissertation is regularization methods and efficient solution path algori...
We consider the problem of supervised learning with convex loss functions and propose a new form of ...
We present a globally convergent method for regularized risk minimization problems. Our method appl...
Many machine learning algorithms lead to solving a convex regularized risk minimization pr...
We establish risk bounds for Regularized Empirical Risk Minimizers (RERM) when the loss is Lipschitz...
The scale of modern datasets necessitates the development of efficient distributed and parallel opti...
Statistical modeling with regularized risk minimization. Given some data points x_i, i = 1, ..., n, lea...
Thesis (Ph.D.)--University of Washington, 2016-08. Design and analysis of tractable methods for estima...
We present a new algorithm for minimizing a convex loss function subject to regularization. Our fram...
Regularization techniques have become a principled tool for model-based statistics and artificial in...
In regularized risk minimization, the associated optimization problem becomes particularly difficult...
The scale of modern datasets necessitates the development of efficient distributed optimization meth...
Supervised learning in general and regularized risk minimization in particular is about solving opti...
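To make the shared setup concrete, here is a minimal sketch in Python of minimizing one instance of such a regularized risk (logistic loss with an ℓ2 penalty) by plain gradient descent. This is a generic illustration of the objective above, not the solver or algorithm of any of the papers excerpted here, and all function and variable names are invented for the example.

    import numpy as np

    def regularized_risk(w, X, y, lam):
        """J(w) = (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)) + (lam/2) * ||w||^2, labels y_i in {-1, +1}."""
        margins = y * (X @ w)
        loss = np.mean(np.log1p(np.exp(-margins)))
        return loss + 0.5 * lam * np.dot(w, w)

    def gradient(w, X, y, lam):
        """Gradient of the averaged logistic loss plus the l2 regularizer."""
        margins = y * (X @ w)
        # per-sample coefficient: -y_i * sigma(-margin_i)
        coeff = -y / (1.0 + np.exp(margins))
        return X.T @ coeff / len(y) + lam * w

    def solve(X, y, lam=0.1, lr=0.5, steps=500):
        """Plain gradient descent on the regularized risk."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            w -= lr * gradient(w, X, y, lam)
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
        y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
        w_hat = solve(X, y)
        print("final objective:", regularized_risk(w_hat, X, y, 0.1))

Swapping the loss or the regularizer (hinge, squared loss, ℓ1 penalty with a proximal step) changes the estimator but not the overall structure of the problem, which is what the methods surveyed above exploit.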