We study accelerated descent dynamics for constrained convex optimization. These dynamics can be described naturally as a coupling of a dual variable accumulating gradients at a given rate η(t), and a primal variable obtained as the weighted average of the mirrored dual trajectory, with weights w(t). Using a Lyapunov argument, we give sufficient conditions on η and w to achieve a desired convergence rate. As an example, we show that the replicator dynamics (an example of mirror descent on the simplex) can be accelerated using a simple averaging scheme. We then propose an adaptive averaging heuristic, which computes the weights so as to speed up the decrease of the Lyapunov function. We provide guarantees on adaptive averaging...
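To make the coupling concrete, here is a minimal discrete-time sketch in Python. It is an illustration only: it assumes an entropic mirror map on the probability simplex, a toy quadratic objective, simple polynomial weights w_k ∝ k, and an illustrative rate schedule; it is not the paper's exact scheme, and in particular it uses fixed rather than adaptive weights.

```python
import numpy as np

def entropic_mirror(z):
    """Mirror map for the probability simplex: softmax of the dual variable z."""
    e = np.exp(z - z.max())              # shift for numerical stability
    return e / e.sum()

def accelerated_mirror_descent(grad_f, x0, steps=200, s=0.1, r=3.0):
    """Illustrative discrete-time version of the coupled dynamics.

    A dual variable z accumulates gradients at a growing rate (here ~ k/r),
    and the primal iterate x is kept as a weighted running average of the
    mirrored dual trajectory with simple polynomial weights w_k ~ k.
    """
    z = np.log(x0)                       # dual point chosen so that mirror(z) = x0
    x = x0.copy()
    weight_sum = 0.0
    for k in range(1, steps + 1):
        z = z - s * (k / r) * grad_f(x)  # dual variable accumulates scaled gradients
        xt = entropic_mirror(z)          # mirrored dual point, stays in the simplex
        w_k = float(k)                   # fixed polynomial averaging weight
        weight_sum += w_k
        x = x + (w_k / weight_sum) * (xt - x)  # update the weighted running average
    return x

if __name__ == "__main__":
    # Toy objective on the simplex: f(x) = 0.5 * ||x - p||^2, minimized at x = p.
    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(5))
    grad_f = lambda x: x - p
    x = accelerated_mirror_descent(grad_f, np.ones(5) / 5)
    print(0.5 * np.sum((x - p) ** 2))    # should be close to 0
```

The adaptive averaging heuristic described above would replace the fixed weights w_k with weights computed at run time so that the Lyapunov function keeps decreasing.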
First-order methods play a central role in large-scale convex optimization. Despite their various fo...
Motivated by the fact that gradient-based optimization algorithms can be studied from the perspe...
In a Hilbert framework, for general convex differentiable optimization, we con...
We study accelerated descent dynamics for constrained conve...
We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original...
Online learning and convex optimization algorithms have become essential tools for solving problems ...
We introduce and analyze a new family of first-order optimization algorithms w...
We show that accelerated gradient descent, averaged gradient descent and the heavy-ball method for q...
By analyzing accelerated proximal gradient methods under a local quadratic gro...
Averaging schemes have attracted extensive attention in deep learning as well as traditional machine l...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
We investigate convex differentiable optimization and explore the temporal discretization of damped ...
Acceleration in optimization is a term that is generally applied to optimization algorithms presenti...