First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually established via Lyapunov functions. In this work, we provide a systematic and principled approach to finding and verifying Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We retrieve convergence results comparable to those of discrete methods using fewer assumptions and convexity inequalities, and we provide new results for stochastic accelerated gradient flows.
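To make the flavor of this approach concrete, the following is a minimal, self-contained sketch (not the framework from the paper) of how verifying a Lyapunov function for a gradient flow can be cast as a small semidefinite program. It checks, for a mu-strongly convex f and the flow dx/dt = -grad f(x), that V(x) = ||x - x_star||^2 decays at rate exp(-2*mu*t). The Gram-matrix parametrization, the choice mu = 0.5, and the use of the cvxpy package are illustrative assumptions, not taken from the paper.

import cvxpy as cp

mu = 0.5  # illustrative strong-convexity parameter (assumed value)

# Gram matrix G of the vectors (x - x_star, grad f(x)); delta = f(x) - f_star.
G = cp.Variable((2, 2), PSD=True)
delta = cp.Variable(nonneg=True)

constraints = [
    # strong convexity at (x, x_star): f_star >= f(x) + <g, x_star - x> + (mu/2)||x - x_star||^2
    G[0, 1] - (mu / 2) * G[0, 0] - delta >= 0,
    # strong convexity at (x_star, x) with grad f(x_star) = 0: f(x) - f_star >= (mu/2)||x - x_star||^2
    delta - (mu / 2) * G[0, 0] >= 0,
    # the problem is homogeneous, so fix the scale
    G[0, 0] <= 1,
]

# With V(x) = ||x - x_star||^2 along dx/dt = -grad f(x),
# dV/dt = -2 <grad f(x), x - x_star>.  Maximize dV/dt + 2*mu*V over all data
# consistent with the constraints above.
worst_case = cp.Problem(cp.Maximize(-2 * G[0, 1] + 2 * mu * G[0, 0]), constraints)
worst_case.solve()
print(worst_case.value)  # nonpositive up to solver tolerance

A nonpositive optimal value certifies the worst-case inequality dV/dt <= -2*mu*V over the entire function class, hence V(t) <= exp(-2*mu*t) V(0); this automated, SDP-based certification of a candidate Lyapunov function is the kind of analysis the abstract refers to.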
Approaches like finite differences with common random numbers, infinitesimal perturbation analysis, ...
Stochastic dynamical systems are fundamental in state estimation, system identification a...
We develop a new continuous-time stochastic gradient descent method for optimizing over the stationa...
We propose new continuous-time formulations for first-order stochastic optimization algorithms such ...
Optimization is among the richest modeling languages in science. In statistics and machine learning,...
Stochastic gradient descent is an optimisation method that combines classical gradient des...
In this article, a family of SDEs is derived as a tool to understand the behavior of numerical opti...
Convergence of a stochastic process is an intrinsic property quite relevant for its successful p...
We analyze the global and local behavior of gradient-like flows under stochastic errors towards the ...
We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original...
This paper proposes a thorough theoretical analysis of Stochastic Gradient Descen...
Prediction and filtering of continuous-time stochastic processes require a solver of a continuous-t...
Motivated by the fact that the gradient-based optimization algorithms can be studied from the perspe...
Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine...