We start by demonstrating that an elementary learning task, learning a linear filter from training data by means of regression, can be solved very efficiently even for feature spaces of very high dimensionality. In a second step, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization, and arguing that the problem of scale has not yet been handled in a satisfactory manner, we resolve both of these shortcomings at once by proposing a technique that we coin scale regularization. This regularization problem can also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we achieve by introducing a speci...
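As a hedged sketch of why the regression step scales to very high-dimensional feature spaces (the specific scale regularizer is truncated above, so plain ridge regression stands in for it; all names below are illustrative): with n training samples and d-dimensional features, d >> n, the regularized filter can be obtained from an n x n dual system instead of a d x d primal one, since (Phi^T Phi + lam I_d)^{-1} Phi^T y = Phi^T (Phi Phi^T + lam I_n)^{-1} y.

```python
import numpy as np

def fit_filter_dual(Phi, y, lam):
    """Regularized least-squares filter via the dual (kernel) form.

    Phi : (n, d) feature matrix with d >> n
    y   : (n,) regression targets
    lam : ridge penalty weight (a stand-in for the paper's scale
          regularizer, whose exact form is truncated in the abstract)
    """
    n = Phi.shape[0]
    # Solve the n x n system (Phi Phi^T + lam I) alpha = y, which stays
    # cheap even when d is huge, then map back to the primal filter.
    alpha = np.linalg.solve(Phi @ Phi.T + lam * np.eye(n), y)
    return Phi.T @ alpha
```

The d x d primal solve would be infeasible for d in the millions, while the dual route costs only O(n^2 d + n^3).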
In several supervised learning applications, it happens that reconstruction methods have to be appli...
Low-complexity non-smooth convex regularizers are routinely used to impose so...
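The abstract truncates before naming its regularizers; the l1 norm is a standard instance of a low-complexity non-smooth convex regularizer, and a minimal sketch of its proximal operator (soft thresholding), the building block of the proximal algorithms typically used with such penalties:

```python
import numpy as np

def soft_threshold(x, lam):
    # prox of lam * ||.||_1: shrink each coordinate toward zero by lam,
    # setting small coordinates exactly to zero (this is what promotes
    # sparsity, the prototypical "low-complexity" structure).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```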
We consider supervised learning in the presence of very many irrelevant features, and study two diff...
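This abstract is cut off before it names the two methods; assuming the common l1-versus-l2 pairing (an assumption, not confirmed by the text), a minimal scikit-learn sketch of how the two penalties treat irrelevant features:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, d, relevant = 50, 200, 5          # far more features than samples
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:relevant] = 1.0              # only 5 features matter
y = X @ w_true + 0.1 * rng.normal(size=n)

for model in (Lasso(alpha=0.1), Ridge(alpha=0.1)):
    coef = model.fit(X, y).coef_
    print(type(model).__name__, "nonzero coefs:", int(np.sum(np.abs(coef) > 1e-6)))
```

Lasso typically drives the coefficients of irrelevant features exactly to zero, whereas Ridge merely shrinks them.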
The millions of filter weights in Convolutional Neural Networks (CNNs) all have a well-defined and ...
In this paper we discuss a relation between Learning Theory and Regularization of linear ill-posed i...
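A worked equation may make the alluded correspondence concrete (hedged: the abstract is truncated, so this is the standard identification from the learning-as-inverse-problem literature, not necessarily this paper's exact statement). Tikhonov regularization of a linear ill-posed problem $Af = g$ and regularized least-squares learning share the same closed form,

$$
f_\lambda = (A^*A + \lambda I)^{-1} A^* g
\qquad\text{vs.}\qquad
f_\lambda = \big(S_{\mathbf{x}}^* S_{\mathbf{x}} + \lambda I\big)^{-1} S_{\mathbf{x}}^* \mathbf{y},
$$

where $S_{\mathbf{x}}$ is the sampling operator $(S_{\mathbf{x}} f)_i = f(x_i)$ and $\mathbf{y}$ is the vector of labels.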
Modern datasets are often huge, possibly high-dimensional, and require complex non-linear parameter...
We propose a method for the approximation of high- or even infinite-dimensional feature vectors, whi...
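The truncation hides the proposed method, so purely as context: random Fourier features are one standard way to approximate an infinite-dimensional (RBF kernel) feature map with a finite vector; a minimal sketch, explicitly not this paper's method:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    # Approximate the infinite-dimensional RBF feature map so that
    # exp(-gamma * ||x - z||^2) ~= phi(x) @ phi(z).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```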
obtained from the application of Tychonov regularization or Bayes estimation to the hypersurface re...
We consider a learning algorithm generated by a regularization scheme with a concave regularizer for...
Learning approaches have recently become very popular in the field of inverse problems. A large vari...
As annotations of data can be scarce in large-scale practical problems, levera...
Regularizing the gradient norm of the output of a neural network is a powerful technique, rediscover...
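A minimal PyTorch sketch of the named technique, in its usual "double backprop" form (the model, inputs, and weight beta are illustrative):

```python
import torch

def gradient_norm_penalty(model, x):
    # Gradient of the summed outputs w.r.t. the inputs; create_graph=True
    # keeps the graph so the penalty itself can be backpropagated
    # ("double backprop").
    x = x.detach().clone().requires_grad_(True)
    out = model(x).sum()
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    # Mean over the batch of the squared gradient norm per sample.
    return grad.pow(2).reshape(grad.shape[0], -1).sum(dim=1).mean()

# usage (beta is an illustrative penalty weight):
# loss = task_loss + beta * gradient_norm_penalty(model, inputs)
```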
In this work we study the performance of different machine learning models by focusing on regularizatio...
Problems in machine learning (ML) can involve noisy input data, and ML classification methods have r...