We consider composite minimax optimization problems in which the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms whose complexity bounds are optimal up to logarithmic factors. In particular, we cover strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.
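To make the setting concrete, problems of this form can be written as min_x max_y { (1/n) \sum_{i=1}^n f_i(x, y) + g(x) - h(y) }, where each f_i is smooth (with possibly non-bilinear coupling between x and y) and g, h are simple proximable regularizers. The sketch below is not the paper's algorithm; it only illustrates the general recipe the abstract refers to, combining an SVRG-style variance-reduced gradient estimator with a proximal extragradient update on a toy finite-sum saddle-point problem. All concrete choices here (the quadratic components, the l1 regularizers, the step size) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): SVRG-style variance
# reduction inside a proximal extragradient loop for
#   min_x max_y (1/n) sum_i f_i(x, y) + g(x) - h(y).
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy = 50, 5, 5

# Toy strongly-convex-strongly-concave components (assumption):
# f_i(x, y) = 0.5*||x||^2 + x^T B_i y - 0.5*||y||^2
B = rng.standard_normal((n, dx, dy)) / np.sqrt(dx * dy)

def grad_fi(i, x, y):
    """Component operator (grad_x f_i, -grad_y f_i): ascent in y becomes descent."""
    return x + B[i] @ y, -(B[i].T @ x - y)

def full_grad(x, y):
    """Full operator at the snapshot point (the SVRG anchor)."""
    Bm = B.mean(axis=0)
    return x + Bm @ y, -(Bm.T @ x - y)

def prox(v, lam, eta):
    """Prox of eta*lam*||.||_1 (soft-thresholding), standing in for g and h."""
    return np.sign(v) * np.maximum(np.abs(v) - lam * eta, 0.0)

x, y = rng.standard_normal(dx), rng.standard_normal(dy)
eta, lam = 0.2, 0.01  # step size and regularization weight (assumed values)

for epoch in range(30):
    xs, ys = x.copy(), y.copy()          # snapshot point
    mgx, mgy = full_grad(xs, ys)         # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced operator estimate at the current point.
        gx, gy = grad_fi(i, x, y)
        sgx, sgy = grad_fi(i, xs, ys)
        vx, vy = gx - sgx + mgx, gy - sgy + mgy
        # Extrapolation (leading) step, with prox for the composite terms.
        xh, yh = prox(x - eta * vx, lam, eta), prox(y - eta * vy, lam, eta)
        # Re-evaluate the variance-reduced operator at the leading point.
        gx, gy = grad_fi(i, xh, yh)
        vx, vy = gx - sgx + mgx, gy - sgy + mgy
        # Update (trailing) step.
        x, y = prox(x - eta * vx, lam, eta), prox(y - eta * vy, lam, eta)

print("||x||, ||y|| at the end:", np.linalg.norm(x), np.linalg.norm(y))
```

An accelerated variant of the kind the abstract describes would add momentum terms on top of this loop; the sketch keeps only the variance-reduction and extragradient structure, under which the iterates of this toy problem contract toward its saddle point at the origin.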