Most deep neural networks use ReLU activation functions. Since these functions are not differentiable at 0, one might expect such models to behave irregularly. In this paper, we show that the issue lies more in the data than in the model: if the data are “smooth”, the model is differentiable in a suitable sense. We give a striking illustration of this fact with the example of adversarial attacks.
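The kink at 0 can be observed directly in an autograd framework. The following minimal sketch assumes PyTorch, which resolves the ambiguity by taking the subgradient 0 at the origin:

```python
import torch

# ReLU(x) = max(x, 0): the left derivative at 0 is 0, the right derivative
# is 1, so no classical derivative exists there.
x = torch.tensor(0.0, requires_grad=True)
torch.relu(x).backward()
print(x.grad)  # tensor(0.) -- PyTorch picks the subgradient 0 at the kink

# Away from 0 the derivative is unambiguous:
for v in (-1.0, 1.0):
    x = torch.tensor(v, requires_grad=True)
    torch.relu(x).backward()
    print(v, x.grad)  # -1.0 -> tensor(0.), 1.0 -> tensor(1.)
```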
Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose...
The activation function plays an important role in training and improving performance in deep neural...
Deep learning is a machine learning technique that enables computers to learn directly from images, ...
The possibility for one to recover the parameters (weights and biases) of a neural network thanks to t...
Deep neural networks have proven remarkably effective at solving many classification problems, but h...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
The generalization capabilities of deep neural networks are not well understood, and in particular, ...
The reliability of deep learning algorithms is fundamentally challenged by the existence of adversar...
Recent years have witnessed the remarkable success of deep neural network (DNN) models spanning a wi...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...
Since, in the physical world, most dependencies are smooth (differentiable), traditionally smooth fu...
We consider neural networks with rational activation functions. The choice of the nonlinear activati...
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorl...