We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/√n, we show that the proposed algorithm is communication efficient: the required round of communication...
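The core step described in this abstract, a damped Newton update whose direction is obtained inexactly via conjugate gradient using only Hessian-vector products, can be sketched as follows. This is a minimal single-machine illustration for L2-regularized logistic regression, not the paper's distributed algorithm: in the distributed setting each Hessian-vector product would be aggregated across machines, and the preconditioning is omitted here. All function names are ours.

```python
import numpy as np

def conjugate_gradient(hvp, g, tol=1e-8, max_iter=50):
    """Approximately solve H v = g using only Hessian-vector products hvp(v).

    This yields the *inexact* Newton direction; in a distributed setting,
    each call to hvp would be an aggregate over the machines' local data.
    """
    v = np.zeros_like(g)
    r = g - hvp(v)          # residual of the linear system
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

def damped_newton_logreg(X, y, lam=0.1, iters=20):
    """Inexact damped Newton for L2-regularized logistic loss.

    The logistic loss is (after scaling) self-concordant-like, and the
    step length 1/(1 + delta), with delta the Newton decrement, is the
    classic damping rule for self-concordant minimization.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (p - y) / n + lam * w
        s = p * (1.0 - p) / n                      # Hessian weights
        hvp = lambda v: X.T @ (s * (X @ v)) + lam * v
        v = conjugate_gradient(hvp, grad)          # inexact Newton step
        delta = np.sqrt(grad @ v)                  # approx. Newton decrement
        w -= v / (1.0 + delta)                     # damped update
    return w
```

Because CG touches the Hessian only through matrix-vector products, the d×d Hessian is never formed, which is what makes the per-round communication in the distributed variant a vector of length d rather than a matrix.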
We analyze two communication-efficient algorithms for distributed optimization in statistical settings...
We study distributed inference, learning and optimization in scenarios which involve networked entit...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
Distributed machine learning bridges the traditional fields of distributed systems and machine learn...
Modern machine learning systems pose several new statistical, scalability, privacy and ethical chall...
We present a novel Newton-type method for distributed optimization, which is particularly well suite...
The scale of modern datasets necessitates the development of efficient distributed optimization meth...
We study the distributed machine learning problem where the n feature-response pairs are partitioned...
Empirical risk minimization (ERM) problems express optimal classifiers as solutions of optimization ...
A common approach to statistical learning with big-data is to randomly split it among m machines and...
Decentralized learning algorithms empower interconnected devices to share data and computational res...
Traditional machine learning models can be formulated as the expected risk minimization (ERM) proble...
The scale of modern datasets necessitates the development of efficient distributed and parallel opti...