Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most optimization algorithms for kernels, including solvers of the SVMStruct family. The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
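The abstract gives no code, but the column-generation idea can be illustrated concretely. The sketch below is a minimal, simplified reading of it, not the paper's method: it assumes a linear model, hinge loss, a finite user-supplied set of input transformations standing in for the invariance, and a plain subgradient solver in place of an SVMStruct-style solver. The names `most_violated`, `fit_invariant_svm`, and `transformations` are illustrative, not from the paper.

```python
# Sketch of column generation for learning with invariances.
# Assumptions (not from the paper): linear model, hinge loss,
# a finite set of transformations, subgradient inner solver.
import numpy as np

def most_violated(w, x, y, transformations):
    """Among the transformed versions of x, return the one with
    the largest hinge loss under the current model w."""
    candidates = [t(x) for t in transformations]
    losses = [max(0.0, 1.0 - y * w.dot(c)) for c in candidates]
    return candidates[int(np.argmax(losses))]

def fit_invariant_svm(X, y, transformations, lam=0.1,
                      outer_iters=10, inner_iters=100):
    n, d = X.shape
    w = np.zeros(d)
    # Working set: start from the original training examples.
    active = [(x, yi) for x, yi in zip(X, y)]
    for _ in range(outer_iters):
        # Column generation: for each training point, add the most
        # violated transformed example to the working set.
        for x, yi in zip(X, y):
            active.append((most_violated(w, x, yi, transformations), yi))
        # Re-solve (approximately) on the enlarged working set by
        # subgradient descent on the regularized hinge risk.
        for it in range(inner_iters):
            eta = 1.0 / (lam * (it + 1))
            grad = lam * w
            for x, yi in active:
                if yi * w.dot(x) < 1.0:
                    grad -= yi * x / len(active)
            w -= eta * grad
    return w

# Toy usage: invariance to sign flips of the second feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sign(X[:, 0])
transformations = [lambda x: x, lambda x: x * np.array([1.0, -1.0])]
w = fit_invariant_svm(X, y, transformations)
print("learned weights:", w)
```

The structure of the loop reflects the "drop-in" claim: the inner solver is never modified, and the invariance enters only through the examples appended to the working set.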
This is an expanded version of the original document. The new appendix discusses previous works whos...
We consider the problem of supervised learning with convex loss functions and propose a new form of ...
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searchi...
Incorporating invariances into a learning algorithm is a common problem in machine learning. We pro...
Invariance and representation learning are important precursors to modeling and classification too...
Incorporating invariance information is important for many learning problems. To exploit invariances...
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirab...
The design of convex, calibrated surrogate losses, whose minimization entails consistency with respe...
Learning gradients is one approach for variable selection and feature covariation estimation...
In this paper, we show that one-class SVMs can also utilize data covariance in a robust manner to im...
We study consistency properties of surrogate loss functions for general multiclass learning problems...
The design of convex, calibrated surrogate losses, whose minimization entails consistency with respe...
We derive a general Convex Linearly Constrained Program (CLCP) parameterized by a matrix G, constru...
Loss functions are central to machine learning because they are the means by which the quality of a ...
This monograph presents the main mathematical ideas in convex optimization. Starting from the funda...