In the context of supervised learning of a function by a neural network, we claim and empirically verify that the neural network yields better results when the distribution of the data set focuses on regions where the function to learn is steep. We first translate this assumption into a mathematically workable form using a Taylor expansion, and derive a new training distribution based on the derivatives of the function to learn. Theoretical derivations then allow us to construct a methodology that we call Variance Based Samples Weighting (VBSW). VBSW uses the labels' local variance to weight the training points. This methodology is general, scalable, cost-effective, and significantly increases the performance of a large class of neural networks for va...
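The abstract above weights training points by the local variance of their labels, which is large where the target function is steep. A minimal sketch of that idea, using k-nearest-neighbor label variance as the weight (the neighborhood size `k` and the toy piecewise target are our illustrative assumptions, not parameters from the paper):

```python
import numpy as np

def vbsw_weights(X, y, k=10):
    """Sketch of variance-based sample weighting for 1-D inputs.

    Each point's weight is the variance of the labels in its
    k-nearest-neighbor neighborhood, normalized to sum to 1.
    """
    # Pairwise distances between the 1-D inputs.
    d = np.abs(X[:, None] - X[None, :])
    # Indices of the k nearest neighbors of each point (including itself).
    idx = np.argsort(d, axis=1)[:, :k]
    # Local label variance: high where the target function is steep.
    local_var = y[idx].var(axis=1)
    # Normalize into a weighting (or sampling) distribution.
    return local_var / local_var.sum()

# Toy target: steep oscillation on [0, 2), flat on [2, 4].
X = np.linspace(0.0, 4.0, 200)
y = np.where(X < 2.0, np.sin(8.0 * X), 0.0)
w = vbsw_weights(X, y)
```

The resulting `w` concentrates almost all of its mass on the steep half of the domain, which is exactly the behavior the abstract argues improves training.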
Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform techniqu...
31 pages, 14 figures, 11 tables. Standard neural networks struggle to generalize...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
Training deep neural networks requires many training samples, but in practice, training labels are e...
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights ...
Deep neural network training spends most of the computation on examples that are properly handled, a...
We study the theory of neural network (NN) from the lens of classical nonparametric regression probl...
The performance of deep learning (DL) models is highly dependent on the quality and size of the trai...
Modern machine learning models are complex, hierarchical, and large-scale and are trained using non-...
We present a novel regularization approach to train neural networks that enjoys better generalizatio...
We present weight normalization: a reparameterization of the weight vectors in a neural net...
Deep learning uses neural networks which are parameterised by their weights. The neural networks ar...
Deep learning has outperformed other machine learning algorithms in a variety of tasks, and as a res...
Deep Learning (DL) methods have emerged as one of the most powerful tools for functional approximati...