This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution. The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the increme...
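The abstract above describes asynchronous methods in which nodes update using stale information without losing convergence. As a hedged illustration only (not the thesis's actual algorithm), the following sketch simulates an incremental gradient method over a sum of component functions, where each gradient is evaluated at a randomly delayed iterate with bounded staleness; the function name, delay model, and quadratic components are all illustrative assumptions.

```python
import numpy as np

def async_incremental_gd(grads, x0, steps=300, max_delay=2, lr=0.05, seed=0):
    """Incremental gradient descent over components f_1..f_m, where each
    step evaluates one component's gradient at a stale iterate whose age
    is a random integer in [0, max_delay] (illustrative delay model)."""
    rng = np.random.default_rng(seed)
    history = [np.array(x0, dtype=float)]  # all past iterates (stale reads)
    x = history[0].copy()
    m = len(grads)
    for k in range(steps):
        i = k % m                              # cyclic pass over components
        d = int(rng.integers(0, max_delay + 1))  # bounded random staleness
        stale = history[max(0, k - d)]           # read a possibly old iterate
        x = x - lr * grads[i](stale)             # update with delayed gradient
        history.append(x.copy())
    return x

# Components f_i(x) = 0.5 * (x - a_i)^2; the sum is minimized at the mean of a_i.
targets = [0.0, 2.0, 4.0]
grads = [(lambda a: (lambda x: x - a))(a) for a in targets]
x_star = async_incremental_gd(grads, x0=np.array([10.0]))
```

Despite the bounded staleness, the iterates settle near the minimizer of the sum (the mean of the targets), which is the qualitative behavior the abstract refers to.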
We propose a novel parallel asynchronous algorithmic framework for the minimization of the sum of a ...
We provide the first theoretical analysis on the convergence rate of asynchronous mini-batch gradie...
In many large-scale optimization problems arising in the context of machine learning the decision va...
We describe several features of parallel or distributed asynchronous iterative algorithms such as un...
We develop and analyze an asynchronous algorithm for distributed convex optimization when the object...
One of the most widely used methods for solving large-scale stochastic optimiz...
In large-scale optimization problems, distributed asynchronous stochastic gradient descent (DASGD) i...
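The abstract above concerns distributed asynchronous stochastic gradient descent (DASGD), where workers apply gradients computed from out-of-date parameters. As a minimal single-process sketch of that delayed-gradient mechanism (an assumption-laden simulation, not the paper's method), the code below applies each gradient at an iterate that is a fixed number of steps old and checks convergence on a simple quadratic:

```python
import numpy as np

def stale_sgd(grad, x0, steps=200, delay=3, lr=0.1):
    """Simulate delayed-gradient descent: each update applies a gradient
    evaluated at an iterate that is `delay` steps old (illustrative model
    of the staleness seen in asynchronous distributed training)."""
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for k in range(steps):
        stale = history[max(0, k - delay)]  # stale read of the parameters
        x = x - lr * grad(stale)            # update with the delayed gradient
        history.append(x.copy())
    return x

# Quadratic objective f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x_final = stale_sgd(lambda x: x, x0=[5.0, -3.0])
```

For a small enough step size relative to the delay, the delayed recursion remains a contraction and the iterates still converge to the minimizer, which is the kind of guarantee analyses of DASGD aim to establish.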
We introduce novel convergence results for asynchronous iterations which appear in the analysis of p...