We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as ℓ1, mixed norm, and trace-norm.
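In the Euclidean special case with an ℓ1 regularizer, a composite update of this kind reduces to a gradient step on the loss followed by soft-thresholding, which is exactly forward-backward splitting. A minimal sketch of that special case (the function names, step-size choice, and toy least-squares problem below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each coordinate toward 0.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_descent_l1(grad, x0, lam, eta, n_steps):
    """Sketch of a composite first-order method with an l1 regularizer
    (Euclidean distance-generating function): each iteration takes a
    gradient step on the smooth loss, then applies soft-thresholding."""
    x = x0.copy()
    for _ in range(n_steps):
        x = soft_threshold(x - eta * grad(x), eta * lam)
    return x

# Illustrative usage: minimize 0.5*||A x - b||^2 + lam*||x||_1.
A = np.eye(2)
b = np.array([1.0, 0.1])
grad = lambda x: A.T @ (A @ x - b)
x_hat = composite_descent_l1(grad, np.zeros(2), lam=0.3, eta=0.5, n_steps=200)
# For this diagonal problem the minimizer is soft_threshold(b, 0.3) = [0.7, 0.0]:
# the small coefficient is driven exactly to zero, illustrating the sparsity
# induced by the l1 term.
```

Because the regularizer is handled through its proximal operator rather than a subgradient, iterates can be exactly sparse rather than merely small.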
We propose new optimization algorithms to minimize a sum of convex functions, which may be...
In the paper, we develop a composite version of the Mirror Prox algorithm for solv...
We consider the unconstrained optimization problem whose objective function is composed of a smooth ...
We introduce and analyze a new family of first-order optimization algorithms w...
This monograph presents the main mathematical ideas in convex optimization. Starting from the funda...
Composite convex optimization models arise in several applications, and are especially prevalent in ...
This paper explores a new framework for reinforcement learning based on online convex optimization, ...
Deterministic and stochastic first order algorithms of large-scale convex opti...
We present a simple unified analysis of adaptive Mirror Descent (MD) and Follow-the-Regularized-Lea...
Given a convex optimization problem and its dual, there are many possible firs...
Sparse modeling has been highly successful in many real-world applications. While a lot of interest...
Stochastic mirror descent (SMD) algorithms have recently garnered a great deal of attention in optim...
Large scale nonsmooth convex optimization is a common problem for a range of computational areas inc...
Thesis: Ph. D. in Mathematics and Operations Research, Massachusetts Institute of Technology, Depart...