We design stochastic Difference-of-Convex-functions Algorithms (DCA) for solving a class of structured Difference-of-Convex-functions (DC) problems. Since the standard DCA requires full (sub)gradient information, which can be expensive in large-scale settings, stochastic approaches rely on stochastic estimates instead. However, stochastic estimation introduces additional variance terms that make stochastic algorithms unstable. We therefore integrate novel variance reduction techniques, including SVRG and SAGA, into our design. Almost sure convergence of the proposed algorithms to critical points is established, and the algorithms' complexities are analyzed. To study the efficiency of our algorithms, we app...
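As a rough illustration of the idea described above, here is a minimal sketch of one epoch of a DCA scheme in which the full subgradient of the concave part is replaced by an SVRG-style variance-reduced estimate. The quadratic choice of g, the component-gradient oracle grad_h_i, and the closed-form subproblem solution are assumptions made for the sketch, not details taken from the abstract.

```python
import numpy as np

# Minimal sketch of one stochastic DCA pass with an SVRG-style estimator.
# Assumptions (not from the abstract): the DC program is
#     min_x  g(x) - h(x),  g(x) = 0.5 * lam * ||x||^2,  h(x) = (1/n) * sum_i h_i(x),
# with grad_h_i(x, i) returning the gradient of a single component h_i.
# The convex subproblem argmin_x g(x) - <v, x> then has the closed form x = v / lam.

def svrg_dca_epoch(x, grad_h_i, n, lam, inner_steps, rng):
    x_ref = x.copy()
    # Full gradient of h at the reference point (the SVRG "snapshot").
    mu = sum(grad_h_i(x_ref, i) for i in range(n)) / n
    for _ in range(inner_steps):
        i = rng.integers(n)
        # Variance-reduced estimate of a (sub)gradient of h at the current iterate.
        v = grad_h_i(x, i) - grad_h_i(x_ref, i) + mu
        # DCA convex subproblem: minimize g(x) - <v, x>.
        x = v / lam
    return x
```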
Stochastic gradient descent is popular for large scale optimization but has slow convergence asympto...
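For context, here is a minimal sketch of the plain SGD baseline this snippet refers to, whose per-sample gradient noise is what slows asymptotic convergence and motivates variance reduction. The least-squares data and the decreasing step-size schedule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of plain SGD on a finite-sum least-squares problem.
# Data, model, and the step-size schedule are illustrative assumptions.

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
b = A @ rng.normal(size=20) + 0.1 * rng.normal(size=1000)

x = np.zeros(20)
for t in range(1, 10001):
    i = rng.integers(len(b))
    grad_i = (A[i] @ x - b[i]) * A[i]   # gradient of one squared residual
    x -= (0.01 / np.sqrt(t)) * grad_i   # decreasing step size; the variance of
                                        # grad_i keeps asymptotic progress slow
```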
Bilevel optimization, the problem of minimizing a value function which involve...
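A minimal sketch of a bilevel problem of this form follows, assuming a ridge-regression lower level solved in closed form and a finite-difference estimate of the hypergradient of the value function; none of these specifics are taken from the abstract.

```python
import numpy as np

# Minimal bilevel sketch: choose a ridge parameter lam (upper level) to minimize
# validation loss at the lower-level ridge solution w*(lam). The data split,
# finite-difference hypergradient, and step size are illustrative assumptions.

rng = np.random.default_rng(1)
A_tr, A_val = rng.normal(size=(80, 10)), rng.normal(size=(40, 10))
w_true = rng.normal(size=10)
b_tr = A_tr @ w_true + 0.1 * rng.normal(size=80)
b_val = A_val @ w_true + 0.1 * rng.normal(size=40)

def inner_solution(lam):
    # Lower level: w*(lam) = argmin_w ||A_tr w - b_tr||^2 + lam * ||w||^2.
    return np.linalg.solve(A_tr.T @ A_tr + lam * np.eye(10), A_tr.T @ b_tr)

def value_function(lam):
    # Upper level: validation loss evaluated at the lower-level minimizer.
    w = inner_solution(lam)
    return np.mean((A_val @ w - b_val) ** 2)

lam = 1.0
for _ in range(50):
    # Finite-difference estimate of the hypergradient d(value)/d(lam).
    g = (value_function(lam + 1e-4) - value_function(lam - 1e-4)) / 2e-4
    lam = max(lam - 0.5 * g, 1e-6)   # projected gradient step keeping lam > 0
```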
Stochastic approximation is one of the effective approaches to deal with the large-scale machine learn...
Nowadays, with the increasing abundance of very large-scale data, classif...
These days, with the increasing abundance of data with high dimensionality, high-dimensional classifi...
The paper deals with stochastic difference-of-convex functions (DC) programs, ...
This work considers optimization methods for large-scale machine learning (ML). Optimization in ML ...
Stochastic gradient optimization is a class of widely used algorithms for training machine learning ...
In this paper, we propose a simple variant of the original SVRG, called variance r...
We consider convex-concave saddle-point problems where the objective functions...
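A minimal, deterministic gradient descent-ascent sketch on a toy convex-concave saddle-point problem of the kind described; the particular objective, data, and step sizes are illustrative assumptions rather than the method of the paper.

```python
import numpy as np

# Gradient descent-ascent on the convex-concave saddle-point problem
#     L(x, y) = 0.5*||x||^2 + y^T (A x - b) - 0.5*||y||^2.
# A, b, the step size, and the iteration count are illustrative assumptions.

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 15))
b = rng.normal(size=30)

x, y = np.zeros(15), np.zeros(30)
eta = 0.01
for _ in range(5000):
    gx = x + A.T @ y        # gradient of L in x (descent direction)
    gy = A @ x - b - y      # gradient of L in y (ascent direction)
    x, y = x - eta * gx, y + eta * gy
```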
Nowadays, Big Data has become essential and omnipresent in every domain. Consequently, ...
Stochastic approximation (SA) is a classical algorithm that has had, since the early days, a huge impa...
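A minimal sketch of the classical Robbins-Monro stochastic approximation recursion follows, here applied to quantile estimation as an illustrative assumption rather than the setting of the (truncated) abstract.

```python
import numpy as np

# Robbins-Monro stochastic approximation: find the root of h(theta) = E[H(theta, X)]
# from noisy samples H(theta, X_t) with step sizes a_t = 1/t. Applied here to
# quantile estimation (an illustrative assumption).

rng = np.random.default_rng(3)
q = 0.9                      # target quantile level
theta = 0.0
for t in range(1, 100001):
    x_t = rng.normal()       # stream of samples from the unknown distribution
    # H(theta, x) = 1{x <= theta} - q has expectation F(theta) - q,
    # whose root is the q-quantile of the distribution.
    theta -= (1.0 / t) * ((x_t <= theta) - q)
# theta ends up near the standard normal 0.9-quantile (about 1.2816).
```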
University of Minnesota Ph.D. dissertation. April 2020. Major: Computer Science. Advisor: Arindam Ba...