Applications abound in which optimization problems must be repeatedly solved, each time with new (but similar) data. Analytic optimization algorithms can be hand-designed to provably solve these problems in an iterative fashion. On one hand, data-driven algorithms can "learn to optimize" (L2O) in far fewer iterations and at a per-iteration cost similar to that of general-purpose optimization algorithms. On the other hand, many L2O algorithms unfortunately lack convergence guarantees. To fuse the advantages of these approaches, we present a Safe-L2O framework. Safe-L2O updates incorporate a safeguard that guarantees convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the data-driven L2O updates would perform poorly or appear to diverge.
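To make the safeguarding idea concrete, here is a minimal Python sketch of one way such an update could look. This is an illustrative sketch, not the paper's exact rule: the acceptance test (requiring the learned proposal to shrink a proximal-gradient fixed-point residual by an assumed factor of 0.99), the fallback step, and all names (`safeguarded_l2o`, `learned_update`, `grad_f`, `prox_g`) are hypothetical choices.

```python
import numpy as np

def safeguarded_l2o(x0, learned_update, grad_f, prox_g, alpha, max_iter=100):
    """Illustrative safeguarded L2O loop (a sketch, not the paper's exact method).

    Accepts the learned proposal only if it sufficiently shrinks the
    proximal-gradient fixed-point residual; otherwise falls back to a
    provably convergent proximal-gradient step.
    """
    def residual(x):
        # r(x) = ||x - prox_g(x - alpha * grad_f(x))||; zero exactly at minimizers.
        return np.linalg.norm(x - prox_g(x - alpha * grad_f(x)))

    x, res = x0, residual(x0)
    for _ in range(max_iter):
        proposal = learned_update(x)           # fast data-driven (L2O) proposal
        if residual(proposal) <= 0.99 * res:   # safeguard test (assumed factor)
            x = proposal                       # accept the learned step
        else:
            x = prox_g(x - alpha * grad_f(x))  # fallback: convergent analytic step
        res = residual(x)
    return x

# Hypothetical usage on a LASSO instance: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
alpha = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1/L, L = ||A||_2^2
grad_f = lambda x: A.T @ (A @ x - b)          # gradient oracle for the smooth term
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - alpha * lam, 0.0)  # prox oracle
# Deliberately naive stand-in for a trained L2O operator: an overlong prox-gradient step.
learned_update = lambda x: prox_g(x - 3.0 * alpha * grad_f(x))
x_hat = safeguarded_l2o(np.zeros(50), learned_update, grad_f, prox_g, alpha)
```

Because every rejected proposal is replaced by a plain proximal-gradient step with step size at most 1/L, the loop inherits that method's convergence guarantee on convex problems, while the learned operator is used whenever it makes fast progress.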
We consider unconstrained randomized optimization of smooth convex functions in the gradient-free se...
We present novel, efficient algorithms for solving extremely large optimization problems. A signific...
The dissertation addresses the research topics of machine learning outlined below. We developed the ...
Machine learning (ML) has evolved dramatically over recent decades, from relative infancy to a pract...
The rapid growth in data availability has led to modern large scale convex optimization problems ...
Thesis: Ph.D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Depart...
In many machine learning problems, an objective is required to be optimized with respect to some co...
Standard stochastic optimization methods are brittle, sensitive to stepsize choices and other algori...
The efficiency of modern optimization methods, coupled with increasing computa...
Motivated by recent work of Renegar, we present new computational methods and associated computation...
We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on ...
Interval methods have shown their ability to locate and prove the existence of a global opti...
Gradient boosting is a state-of-the-art prediction technique that sequentially produces a model in t...
Recently, there has been a surge of interest in incorporating tools from dynamical systems and contr...