Deep neural networks have become popular in many supervised learning tasks, but they may suffer from overfitting when the training dataset is limited. To mitigate this, many researchers use data augmentation, a widely used and effective method for increasing the variety of a dataset. However, the randomness introduced by data augmentation causes an inevitable inconsistency between training and inference, which limits the resulting improvement. In this paper, we propose a consistency regularization framework based on data augmentation, called CR-Aug, which forces the output distributions of different sub-models generated by data augmentation to be consistent with each other. Specifically, CR-Aug evaluates the discrepancy between the output dis...
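The core idea stated in the abstract, passing two stochastically augmented views of each batch through the model and penalizing the discrepancy between their output distributions alongside the usual supervised loss, can be illustrated with a short sketch. The snippet below is a minimal illustration assuming PyTorch; the symmetric-KL discrepancy, the loss weight lam, and the augment transform are placeholder choices for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    # Symmetric KL divergence between the softmax outputs of two
    # augmented views (an assumed choice of discrepancy measure).
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    kl_ab = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)

def training_step(model, x, y, augment, optimizer, lam=1.0):
    # One step of augmentation-based consistency training.
    # `augment` is a stochastic transform, so the two views of `x`
    # differ; the model is encouraged to give them matching outputs.
    logits_a = model(augment(x))
    logits_b = model(augment(x))
    # Supervised loss on both views plus the consistency term.
    ce = 0.5 * (F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y))
    loss = ce + lam * consistency_loss(logits_a, logits_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, the consistency term acts as a regularizer that discourages the model from producing augmentation-dependent predictions, which is the inconsistency between training and inference that the abstract identifies.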
Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a c...
Regularizing the gradient norm of the output of a neural network is a powerful technique, rediscover...
Most complex machine learning and modelling techniques are prone to overfitting and may subsequently...
Data augmentation is popular in the training of large neural networks; currently, however, there is ...
Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial ...
Data augmentation is an inexpensive way to increase training data diversity and is commonly achieved ...
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm ...
Graph neural networks (GNNs) have demonstrated superior performance in various tasks on graphs. Howe...
Despite powerful representation ability, deep neural networks (DNNs) are prone to over-fitting, beca...
Currently deep learning requires large volumes of training data to fit accurate models. In practice,...
To expand the size of a real dataset, data augmentation techniques artificially create various versi...
Deep Learning, the learning of deep neural networks, is nowadays indispensable not only in the field...
Consistency regularization on label predictions has become a fundamental technique in semi-supervised l...
Consistency regularization is a commonly-used technique for semi-supervised and self-supervised lear...