This work proposes a new learning strategy for training a feedforward neural network subject to spectral norm and nonnegativity constraints. Our primary goal is to control the Lipschitz constant of the network in order to make it robust against adversarial perturbations of its inputs. We propose a stochastic projected gradient descent algorithm which allows us to adjust this constant in the training process. The algorithm is evaluated in the context of designing a fully connected network for Automatic Gesture Recognition based on EMG signals. We perform a comparison with the same architecture trained either in a standard manner or with simpler constraints. The obtained results highlight that a good accuracy-robustness ...
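Although the abstract does not spell out the algorithm, a minimal sketch can illustrate the general idea of stochastic projected gradient descent under spectral-norm and nonnegativity constraints: after each gradient step on a layer's weight matrix, project back onto the constraint set. In the sketch below, the names project_weights, projected_sgd_step, and the bound rho are hypothetical, and clamp-then-rescale is only an approximate projection onto the intersection of the two constraint sets; the paper's exact projection may differ.

```python
import numpy as np

def project_weights(W, rho, n_iter=20):
    """Approximately project W onto {W >= 0, ||W||_2 <= rho}.

    Nonnegativity is enforced by clamping; the spectral norm is
    estimated by power iteration, and the matrix is rescaled when
    the estimate exceeds rho. (The exact Euclidean projection onto
    the spectral ball would clip every singular value at rho.)
    """
    W = np.maximum(W, 0.0)            # nonnegativity constraint
    v = np.random.randn(W.shape[1])
    for _ in range(n_iter):           # power iteration for sigma_max(W)
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
    sigma = u @ W @ v                 # leading-singular-value estimate
    if sigma > rho:
        W = W * (rho / sigma)         # positive scaling preserves nonnegativity
    return W

def projected_sgd_step(W, grad, lr, rho):
    """One stochastic projected-gradient update on a layer's weights."""
    return project_weights(W - lr * grad, rho)
```

The connection to robustness is standard: for a K-layer feedforward network with 1-Lipschitz activations such as ReLU, the Lipschitz constant of the input-output map is upper-bounded by the product of the layer spectral norms, so keeping each layer's norm below rho bounds the network's constant by rho^K.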
We investigate a new approach to compute the gradients of artificial neural networks (ANNs), based o...
Since the discovery of the back-propagation method, many modified and new algorithms have been propo...
In this thesis, we theoretically analyze the ability of neural networks trained by gradient descent ...
This paper introduces a novel approach for building a robust Automatic Gesture...
We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with r...
We introduce the problem of training neural networks such that they are robust against a class of sm...
This paper discusses the stabilizability of artificial neural networks trained by utilizing the gradi...
Improving adversarial robustness of neural networks remains a major challenge. Fundamentally, traini...
In stochastic gradient descent (SGD) and its variants, the optimized gradient estimators may be as e...
Gradient-following learning methods can encounter problems of implementation in many applications, ...
We propose a principled framework that combines adversarial training and provable robustness verific...
This paper presents a method for stabilizing and robustifying the artificial neural networ...
We investigate the robustness of feed-forward neural networks when input data ...
The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradie...