Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has improved the generalization performance of neural networks. In contrast to perturbing individual inputs to enhance generality, this paper introduces adversarial dropout: a minimal set of dropouts that maximizes the divergence between 1) the training supervision and 2) the outputs of the network with those dropouts applied. The identified adversarial dropouts are used to automatically reconfigure the neural network during training, and we demonstrate that simultaneous training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervise...
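The idea above — searching for a small set of dropped units that maximally disagrees with the supervision — can be sketched greedily. This is a minimal numpy illustration on a toy one-hidden-layer network, not the paper's exact algorithm; the function names (`adversarial_dropout`, `forward`) and the greedy flip-search with a flip budget are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; the dropout mask acts on the hidden units.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, mask):
    h = np.maximum(x @ W1, 0.0) * mask   # ReLU hidden layer with dropout mask
    return softmax(h @ W2)

def kl(p, q, eps=1e-12):
    # KL(p || q), the divergence between supervision and network output.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def adversarial_dropout(x, target, base_mask, budget=2):
    """Greedily flip at most `budget` mask entries so that the network
    output diverges as much as possible from the training supervision --
    a sketch of the 'minimal set of dropouts that maximizes the
    divergence' described above."""
    mask = base_mask.copy()
    for _ in range(budget):
        best_gain, best_i = 0.0, None
        cur = kl(target, forward(x, mask))
        for i in range(mask.size):
            trial = mask.copy()
            trial[i] = 1.0 - trial[i]      # flip one dropout decision
            gain = kl(target, forward(x, trial)) - cur
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:                 # no flip increases the divergence
            break
        mask[best_i] = 1.0 - mask[best_i]
    return mask

x = rng.normal(size=4)
target = np.array([1.0, 0.0, 0.0])         # one-hot supervision
base = (rng.random(8) > 0.5).astype(float)  # ordinary random dropout mask
adv = adversarial_dropout(x, target, base)
```

In training, the loss on `forward(x, adv)` would be minimized alongside the ordinary loss, so the network is pushed to stay correct even under its own worst-case dropout configuration.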
We propose a novel technique to make neural networks robust to adversarial examples using a generativ...
Adversarial training is an effective learning technique to improve the robustness of deep neural net...
Dropout training, originally designed for deep neural networks, has been successful on high-dimensio...
Successful application processing sequential data, such as text and speech, requires an improved gen...
Deep neural nets with a large number of parameters are very powerful machine learning systems. Howev...
Deep learning models have achieved an impressive performance in a variety of tasks, but they often s...
Recently it has been shown that when training neural networks on a limited amount of data, randomly ...
Dropout is one of the most popular regularization methods used in deep learning. The general form of...
The undeniable computational power of artificial neural networks has granted the scientific communit...
Recent years have witnessed the success of deep neural networks in dealing with plenty of practica...
Dropout and other feature noising schemes control overfitting by artificially corrupting the traini...
Recently, it was shown that deep neural networks perform very well if the activities of hidden units...
Adversarial examples cause neural networks to produce incorrect outputs with high confidence. Althou...
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rig...