We analyze two communication-efficient algorithms for distributed optimization in statistical settings involving large-scale data sets. The first algorithm is a standard averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error (MSE) that decays as O(N^{-1} + (N/m)^{-2}). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all N samples. The second algorithm is a novel method, based on an appropriate form of bootstrap subsa...
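The average mixture scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a one-dimensional linear model y = w·x + noise, for which each machine's local least-squares problem has a closed-form solution; the function names (`local_estimate`, `average_mixture`) are hypothetical.

```python
import random

def local_estimate(xs, ys):
    # Closed-form 1-D least squares on one machine's shard:
    # argmin_w sum_i (y_i - w * x_i)^2  =>  w = sum(x*y) / sum(x^2)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def average_mixture(xs, ys, m):
    # Split the N samples evenly across m machines, solve each
    # local subproblem separately, then average the m estimates.
    n = len(xs) // m
    estimates = [
        local_estimate(xs[i * n:(i + 1) * n], ys[i * n:(i + 1) * n])
        for i in range(m)
    ]
    return sum(estimates) / m

random.seed(0)
N, m, w_true = 10_000, 10, 2.0
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [w_true * x + random.gauss(0, 0.1) for x in xs]

w_avg = average_mixture(xs, ys, m)
print(w_avg)  # close to w_true = 2.0
```

With m = 10 and N = 10,000 we have m ≤ √N, the regime in which the abstract's O(N^{-1} + (N/m)^{-2}) MSE bound matches the centralized rate; empirically the averaged estimate here is nearly as accurate as fitting all N samples at once.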
Traditional machine learning models can be formulated as the expected risk minimization (ERM) proble...
We study optimization algorithms for the finite sum problems frequently arising in machine learning...
Abstract—We consider the problem of distributed stochastic optimization, where each of several machi...
Distributed machine learning bridges the traditional fields of distributed systems and machine learn...
Modern machine learning systems pose several new statistical, scalability, privacy and ethical chall...
This thesis is concerned with the design of distributed algorithms for solving optimization probl...
This dissertation deals with developing optimization algorithms which can be distributed over a netw...
We consider the problem of communication efficient distributed optimization where multiple nodes exc...
We consider decentralized stochastic optimization with the objective function (e.g. data samples for...
We propose distributed algorithms for high-dimensional sparse optimization. In...
Classically, the performance of estimators in statistical learning problems is measured in terms of ...
It is well known that stochastic optimization algorithms are both theoretically and practically well...
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed...
In recent years, the rapid development of new generation information technology has resulted in an u...
Optimization has been the workhorse of solving machine learning problems. However, the efficiency of...