In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well-suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. Both methodologies are easy to implement. We also show how to remove non-differentiability of t...
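As a concrete illustration of the fixed-point methodology for the quadratic-loss special case, the sketch below solves kernel ridge regression by a damped Picard iteration. The RBF kernel, data, and step-size rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def fixed_point_krr(K, y, lam, n_iter=500):
    """Solve (K + lam*I) c = y by the fixed-point iteration
    c <- c - eta * ((K + lam*I) c - y),
    which is a contraction for 0 < eta < 2 / lambda_max(K + lam*I)."""
    n = K.shape[0]
    A = K + lam * np.eye(n)
    eta = 1.0 / np.linalg.eigvalsh(A)[-1]  # safe step size (assumption)
    c = np.zeros(n)
    for _ in range(n_iter):
        c = c - eta * (A @ c - y)
    return c

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
K = rbf_kernel(X)
c = fixed_point_krr(K, y, lam=0.5)
c_direct = np.linalg.solve(K + 0.5 * np.eye(20), y)
print(np.allclose(c, c_direct, atol=1e-6))
```

Each iterate only needs a matrix-vector product, which is why iterations of this form parallelize naturally across the coefficients of `c`.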
We present a kernel-based framework for pattern recognition, regression estimation, function approxi...
Large-scale `1-regularized loss minimization problems arise in high-dimensional applications such as...
Coordinate descent with random coordinate selection is the current state of the art for many large s...
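The two snippets above mention l1-regularized loss minimization and coordinate descent with random coordinate selection. A minimal sketch of that combination for the Lasso objective, with the data and regularization level chosen purely for illustration, could look like:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def random_cd_lasso(X, y, lam, n_iter=5000, seed=0):
    """Randomized coordinate descent for (1/2)||Xw - y||^2 + lam*||w||_1.
    Each step minimizes the objective exactly in one randomly chosen
    coordinate; for this separable penalty the line search has the
    closed-form soft-thresholding solution."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    col_sq = np.sum(X**2, axis=0)  # ||X_j||^2 for each column
    r = y - X @ w                  # residual, kept up to date
    for _ in range(n_iter):
        j = rng.integers(d)
        rho = X[:, j] @ r + col_sq[j] * w[j]
        w_new = soft_threshold(rho, lam) / col_sq[j]
        r += X[:, j] * (w[j] - w_new)  # O(n) residual update
        w[j] = w_new
    return w
```

Keeping the residual `r` up to date makes each coordinate step O(n) instead of O(nd), which is the main reason coordinate descent scales to the large sparse problems these abstracts describe.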
The representer theorem for kernel methods states that the solution of the associated variational pr...
This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separab...
In this paper we consider the regularized version of the Jacobi algorithm, a block coordinate descen...
We consider the minimization of a smooth loss with trace-norm regularization, ...
We propose a method to learn simultaneously a vector-valued function and a kernel between its compon...
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKH...
Support Vector (SV) Machines combine several techniques from statistics, machine learning and neural...
In many machine learning problems such as the dual form of SVM, the objective function to be minimiz...