Motivated by applications to machine learning and imaging science, we study a class of online and stochastic optimization problems with loss functions that are not Lipschitz continuous; in particular, the loss functions encountered by the optimizer could exhibit gradient singularities or be singular themselves. Drawing on tools and techniques from Riemannian geometry, we examine a Riemann-Lipschitz (RL) continuity condition which is tailored to the singularity landscape of the problem's loss functions. In this way, we are able to tackle cases beyond the Lipschitz framework provided by a global norm, and we derive optimal regret bounds and last-iterate convergence results through the use of regularized learning methods ...
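The "regularized learning methods" the abstract refers to include online mirror descent (OMD). As a rough illustration only (the simplex domain, entropic mirror map, and 1/sqrt(t) step-size schedule below are assumptions made for the sketch, not the paper's exact setup), OMD with the negative-entropy regularizer reduces to a multiplicative-weights update:

# A minimal sketch of online mirror descent with an entropic regularizer
# on the probability simplex -- one member of the "regularized learning"
# family. Illustrative assumptions: simplex domain, entropic mirror map,
# 1/sqrt(t) step sizes.
import numpy as np

def entropic_omd(gradients, dim, step=0.1):
    """Online mirror descent on the simplex (multiplicative weights).

    gradients: iterable of per-round gradient vectors g_1, ..., g_T.
    Returns the list of iterates x_1, ..., x_{T+1}.
    """
    x = np.full(dim, 1.0 / dim)            # uniform starting point
    iterates = [x.copy()]
    for t, g in enumerate(gradients, start=1):
        # Entropic mirror step: exponentiate the scaled gradient ...
        x = x * np.exp(-step / np.sqrt(t) * g)
        # ... and renormalize, which is the Bregman projection onto
        # the simplex under the negative-entropy regularizer.
        x /= x.sum()
        iterates.append(x.copy())
    return iterates

# Usage: 100 rounds of random linear losses in dimension 5.
rng = np.random.default_rng(0)
xs = entropic_omd((rng.standard_normal(5) for _ in range(100)), dim=5)

The relevance of the mirror map here is that it measures steps in a non-Euclidean geometry adapted to the domain, which is the same lever the Riemann-Lipschitz condition pulls on the loss side.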
The framework of online learning with memory naturally captures learning problems with temporal effe...
This paper develops a methodology for regret minimization with stochastic first-order oracle feedbac...
The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and...
Online convex optimization (OCO) is a powerful algorithmic framework that has extensive applications...
Stochastic mirror descent (SMD) algorithms have recently garnered a great deal of attention in optim...
Modern applications in sensitive domains such as biometrics and medicine frequently require the use ...
We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This i...
We present a simple unified analysis of adaptive Mirror Descent (MD) and Follow-the-Regularized-Lea...
We propose a new family of adaptive first-order methods for a class of convex ...
This dissertation presents several contributions at the interface of methods for convex optimization...
Several important problems in learning theory and data science involve high-dimensional optimization...
Stochastic and adversarial data are two widely studied settings in online learning. But many optimiz...
Stochastic approximation techniques have been used in various contexts in data...