We describe Asaga, an asynchronous parallel version of the incremental gradient algorithm Saga that enjoys fast linear convergence rates. We highlight a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently proposed "perturbed iterate" framework that resolves it. We thereby prove that Asaga can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
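To make the algorithm concrete, the following is a minimal sketch of the serial Saga update that Asaga runs lock-free across cores. The least-squares loss, the function name `saga`, and all variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the serial SAGA update (illustrative assumption: the
# objective is least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2).
import numpy as np

def saga(A, b, step_size, n_iters, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_table = np.zeros((n, d))        # alpha_i: last gradient stored for sample i
    grad_avg = grad_table.mean(axis=0)   # running average of the gradient table
    for _ in range(n_iters):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]     # fresh gradient of f_i at the current iterate
        # SAGA step: unbiased, variance-reduced gradient estimate
        x -= step_size * (g - grad_table[i] + grad_avg)
        # maintain the running average and overwrite the table entry for sample i
        grad_avg += (g - grad_table[i]) / n
        grad_table[i] = g
    return x
```

In the asynchronous setting, several cores execute this inner loop concurrently on the shared iterate and gradient table without locks, so reads may be inconsistent; the paper's convergence analysis handles exactly this through the simplified perturbed iterate framework.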