Appears in Advances in Neural Information Processing Systems 30 (NIPS 2017), 28 pages.
Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed...
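To make the "variance reduced incremental gradient" update concrete, the following is a minimal sequential sketch of a SAGA-style step combined with the soft-thresholding proximal operator of an l1 penalty (the Lasso case mentioned above). It is an assumption-laden illustration in Python/NumPy, not the paper's ProxASAGA implementation: the function names (prox_l1, saga_prox_l1), the least-squares loss and the step-size choice below are ours, and the actual method additionally runs such updates asynchronously across cores with sparse per-coordinate writes.

import numpy as np

def prox_l1(x, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def saga_prox_l1(A, b, lam, step, n_epochs=50, seed=0):
    # Sequential proximal-SAGA sketch for the Lasso:
    #   min_x  (1/2n) ||A x - b||^2 + lam * ||x||_1
    # Illustration only: ProxASAGA performs updates of this kind
    # asynchronously, without locks, on a shared iterate.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_memory = np.zeros((n, d))       # last gradient seen for each sample
    grad_avg = grad_memory.mean(axis=0)  # running average of stored gradients
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]          # gradient of the i-th term
        v = g_new - grad_memory[i] + grad_avg     # variance-reduced estimate
        x = prox_l1(x - step * v, step * lam)     # forward-backward step
        grad_avg += (g_new - grad_memory[i]) / n  # keep the average in sync
        grad_memory[i] = g_new
    return x

For instance, with synthetic data one could call x_hat = saga_prox_l1(A, b, lam=0.1, step=1.0 / (3 * np.max(np.sum(A**2, axis=1)))), using the usual SAGA-type step size of order 1/(3L) with L the largest per-sample smoothness constant (here max_i ||a_i||^2 for the least-squares terms).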
This thesis aims at developing efficient optimization algorithms for solving large-scale machine lea...
The dissertation addresses the research topics of machine learning outlined below. We developed the ...
Finding convergence rates for numerical optimization algorithms is an important task, because it giv...
We describe Asaga, an asynchronous parallel version of the incremental gradien...
In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and...
As datasets continue to increase in size and multi-core computer architectures...
Nesterov's accelerated gradient (AG) is a popular technique to optimize objective functions comprisi...
We propose a new asynchronous parallel block-descent algorithmic framework for the minimization of t...
We study stochastic optimization problems when the data is sparse, which is in a sense dual to curre...
This thesis proposes parallel and distributed algorithms for solving very large-scale sparse optimiza...
In machine learning research, many emerging applications can be (re)formulated as the composition op...
Parallel and distributed algorithms have become a necessity in modern machine learning tasks. In th...
We propose a novel parallel asynchronous algorithmic framework for the minimization of the sum of a ...
Stochastic Gradient Descent (SGD) is very useful in optimization problems with high-dimensional non-...