Data augmentation is an inexpensive way to increase training data diversity and is commonly achieved via transformations of existing data. For tasks such as classification, there is a good case for learning representations of the data that are invariant to such transformations, yet this is not explicitly enforced by classification losses such as the cross-entropy loss. This paper investigates the use of training objectives that explicitly impose this consistency constraint and how it can impact downstream audio classification tasks. In the context of deep convolutional neural networks in the supervised setting, we show empirically that certain measures of consistency are not implicitly captured by the cross-entropy loss and that incorporating such measures...
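To make the consistency constraint concrete, the sketch below shows a cross-entropy objective combined with an explicit consistency penalty between a clip and its augmented view. It is a minimal PyTorch-style illustration; the model, the augment function, and the weighting term lam are illustrative assumptions, not the paper's exact formulation.

    # Sketch: cross-entropy plus an explicit consistency term between
    # the original input and its augmented view (illustrative, not the
    # paper's exact objective).
    import torch
    import torch.nn.functional as F

    def consistency_ce_loss(model, x, y, augment, lam=1.0):
        logits_clean = model(x)          # predictions on the original audio
        logits_aug = model(augment(x))   # predictions on the transformed audio

        # Standard supervised term.
        ce = F.cross_entropy(logits_clean, y)

        # Consistency term: KL divergence pushing the prediction on the
        # clean view toward the prediction on the augmented view (other
        # choices, e.g. L2 or cosine distance on embeddings, also work).
        log_p_clean = F.log_softmax(logits_clean, dim=-1)
        p_aug = F.softmax(logits_aug, dim=-1)
        cons = F.kl_div(log_p_clean, p_aug, reduction="batchmean")

        return ce + lam * cons

With lam = 0 this reduces to plain cross-entropy training on the original data, which is the baseline the abstract argues does not implicitly enforce invariance.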
Cross-entropy loss has served as the main objective function for classification-based tasks. Widely ...
Convolutional neural networks represent the state of the art in multiple fields. Techniques that imp...
A grand challenge in representation learning is the development of computation...
Deep neural networks have become popular in many supervised learning tasks, but they may suffer from...
Inspired by the recent progress in self-supervised learning for computer vision, in this paper we in...
Deep learning has fueled an explosion of applications, yet training deep neural networks usually req...
Invariance-based learning is a promising approach in deep learning. Among other benefits, it can mit...
Consistency regularization is a commonly-used technique for semi-supervised and self-supervised lear...
Although the L1 and L2 loss functions do not represent any perceptually-related information besides ...
The success of supervised deep learning methods is largely due to their ability to learn relevant fe...
The goal of this thesis is to train an artificial neural network which will be able to improve the t...
Many of today's state-of-the-art automatic speech recognition (ASR) systems are based on hybrid hidd...
Deep learning models have recently led to significant improvements in a wide variety of tasks. Known...
Generative audio models based on neural networks have le...
Recent research reveals that deep neural networks are sensitive to label noise, leading to po...