We consider the minimization of a smooth loss with trace-norm regularization, which is a natural objective in multi-class and multi-task learning. Even though the problem is convex, existing approaches either rely on optimizing a non-convex variational bound, which is not guaranteed to converge, or repeatedly perform singular-value decomposition, which prevents scaling beyond moderate matrix sizes. We lift the non-smooth convex problem into an infinite-dimensional smooth problem and apply coordinate descent to solve it. We prove that our approach converges to the optimum, and show that it is competitive with or outperforms the state of the art.
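To make the lifted formulation concrete, below is a minimal Python sketch, not the paper's implementation. It assumes a squared loss f(W) = ½‖XW − Y‖²_F; the names `lifted_cd` and `lam` are ours; it finds the leading singular pair with a full SVD for brevity, where a scalable implementation would use power iteration on the gradient; and it only adds new rank-one atoms (a forward-greedy simplification), whereas full coordinate descent in the lifted space also re-optimizes the weights of existing atoms.

```python
import numpy as np

# Illustrative sketch (our assumptions, not the paper's code):
#   min_W  f(W) + lam * ||W||_*,   f(W) = 0.5 * ||X @ W - Y||_F^2.
# W is held as a nonnegative combination of rank-one atoms u v^T,
# and lam times the sum of atom weights upper-bounds lam * ||W||_*
# (with equality at an optimal decomposition). Each step picks the
# steepest coordinate -- the leading singular pair of -grad f(W) --
# and takes an exact line-search step on its weight.

def grad_f(W, X, Y):
    """Gradient of f(W) = 0.5 * ||X W - Y||_F^2."""
    return X.T @ (X @ W - Y)

def leading_singular_pair(G):
    """Top singular vectors of G (the steepest rank-one direction)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, 0], Vt[0, :], s[0]

def lifted_cd(X, Y, lam, n_steps=200):
    d, k = X.shape[1], Y.shape[1]
    W = np.zeros((d, k))
    for _ in range(n_steps):
        G = grad_f(W, X, Y)
        u, v, sigma = leading_singular_pair(-G)
        if sigma <= lam:          # no coordinate has negative directional derivative
            break
        A = np.outer(u, v)        # new rank-one atom
        # Exact line search in t >= 0 for the quadratic objective
        #   phi(t) = f(W + t A) + lam * t:
        XA = X @ A
        R = X @ W - Y
        t = max(0.0, -(np.sum(R * XA) + lam) / np.sum(XA * XA))
        W = W + t * A
    return W

# Toy usage: recover a low-rank matrix from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
W_true = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 10))
Y = X @ W_true + 0.01 * rng.standard_normal((100, 10))
W_hat = lifted_cd(X, Y, lam=1.0)
print("rank of estimate:", np.linalg.matrix_rank(W_hat, tol=1e-3))
```

The stopping test `sigma <= lam` mirrors the subgradient optimality condition for trace-norm regularization: no new atom improves the objective once the spectral norm of the gradient falls below the regularization weight.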