Since the Lipschitz properties of convolutional neural networks (CNNs) are widely considered to be related to adversarial robustness, we theoretically characterize the L-1 norm and L-infinity norm of 2D multi-channel convolutional layers and provide efficient methods to compute the exact L-1 norm and L-infinity norm. Based on our theorem, we propose a novel regularization method termed norm decay, which can effectively reduce the norms of convolutional layers and fully-connected layers. Experiments show that norm-regularization methods, including norm decay, weight decay, and singular value clipping, can improve generalization of CNNs. However, they can slightly hurt adversarial robustness. Observing this unexpected phenomenon, we compute t...
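For a fully-connected layer, the induced L-1 and L-infinity operator norms have simple closed forms: the maximum absolute column sum and the maximum absolute row sum of the weight matrix, respectively. The following is a minimal NumPy sketch of that standard fact, not the paper's exact algorithm for multi-channel convolutional layers (which must additionally account for weight sharing across spatial positions):

```python
import numpy as np

def linf_operator_norm(W):
    # Induced L-infinity norm of the linear map x -> W @ x:
    # the maximum absolute row sum of W.
    return np.abs(W).sum(axis=1).max()

def l1_operator_norm(W):
    # Induced L-1 norm of the linear map x -> W @ x:
    # the maximum absolute column sum of W.
    return np.abs(W).sum(axis=0).max()

# Small illustrative weight matrix (values are arbitrary).
W = np.array([[1.0, -2.0],
              [3.0,  0.5]])

print(linf_operator_norm(W))  # 3.5  (row |3| + |0.5|)
print(l1_operator_norm(W))    # 4.0  (column |1| + |3|)
```

A norm-decay-style regularizer in this spirit would add a multiple of such an operator norm to the training loss, in place of the squared Frobenius penalty used by weight decay.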
Building nonexpansive Convolutional Neural Networks (CNNs) is a challenging pr...
Recently smoothing deep neural network based classifiers via isotropic Gaussian perturbation is show...
Analysing Generalisation Error Bounds for Convolutional Neural Networks. Abstract: Convolutional neur...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
Poster for Adversarial Robustness through the Lens of Convolutional Filters. Abstract: Deep learning...
The robustness of neural networks can be quantitatively indicated by a lower bound within which any ...
Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small...
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In partic...
The reliability of deep learning algorithms is fundamentally challenged by the existence of adversar...
Recent studies on the adversarial vulnerability of neural networks have shown that models trained wi...
Recent discoveries uncovered flaws in machine learning algorithms such as deep neural networks. Deep...
It is of significant importance for any classification and recognition system, which claims near or ...
Neural networks are known to be highly sensitive to adversarial examples. These may arise due to dif...
We propose Absum, which is a regularization method for improving adversarial robustness of convoluti...
Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial ...
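To make the inner step of adversarial training concrete, here is a hedged sketch of the Fast Gradient Sign Method (FGSM) applied to a linear logistic model; the model, weights, and values are illustrative assumptions, not taken from any of the papers above:

```python
import numpy as np

def logistic_loss(w, x, y):
    # Logistic loss for a linear model, with label y in {-1, +1}.
    return np.log1p(np.exp(-y * (w @ x)))

def fgsm_perturb(x, grad, eps):
    # FGSM: one loss-ascent step on the input, bounded in L-infinity norm by eps.
    return x + eps * np.sign(grad)

w = np.array([1.0, -0.5])   # fixed illustrative weights
x = np.array([0.2, 0.4])    # clean input
y = 1.0
eps = 0.1

# Gradient of the logistic loss with respect to the input x.
sigma = 1.0 / (1.0 + np.exp(y * (w @ x)))  # sigmoid(-y * (w @ x))
grad_x = -y * w * sigma

x_adv = fgsm_perturb(x, grad_x, eps)
print(logistic_loss(w, x, y))      # clean loss
print(logistic_loss(w, x_adv, y))  # adversarial loss (strictly larger here)
```

Adversarial training then minimizes the loss at the perturbed input x_adv rather than at x, typically using an iterated (PGD-style) version of this step.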