Deep neural networks (DNNs) typically have many weights. When errors appear in these weights, which are usually stored in non-volatile memories, performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, error-correcting codes are used as external redundancy to protect the weights from errors, and a deep reinforcement learning algorithm optimizes the redundancy-performance tradeoff. In the second approach, internal redundancy is added to neurons via coding, enabling them to perform robust inference in noisy environments.
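To make the external-redundancy idea concrete, here is a minimal Python sketch under stated assumptions; it is not the paper's actual code construction or its reinforcement-learning optimizer. It assumes hypothetically that weights are quantized to 4 bits and stores each as a Hamming(7,4) codeword, so any single bit flip in memory can be corrected before inference. All function and variable names are illustrative.

def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]   # d[0]..d[3] = data bits
    p1 = d[0] ^ d[1] ^ d[3]                     # parity over positions 3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                     # parity over positions 3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                     # parity over positions 5,6,7
    # Codeword bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Correct at most one flipped bit, then recover the 4-bit value."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # nonzero syndrome = 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    d = [c[2], c[4], c[5], c[6]]
    return sum(b << i for i, b in enumerate(d))

# Store a quantized weight redundantly, flip one stored bit, and recover it.
weight = 0b1011                      # a 4-bit quantized weight
stored = hamming74_encode(weight)
stored[5] ^= 1                       # simulate a single memory bit error
assert hamming74_decode(stored) == weight

A real deployment would weigh stronger codes (more correctable errors per weight) against their memory overhead; that is the redundancy-performance tradeoff the abstract's deep reinforcement learning algorithm is said to optimize.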
This work studies the sensitivity of neural networks to weight perturbations, firstly corresponding ...
The latest Deep Learning (DL) methods for designing Deep Neural Networks (DNN) have significantly ex...
In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs)...
In the literature, it is argued that Deep Neural Networks (DNNs) possess a cer...
In the last decade, deep neural networks have achieved tremendous success in many fields of machine ...
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence area...
The recent success of deep neural networks (DNNs) in challenging perception tasks makes them a power...
When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in me...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
Deep learning research has recently witnessed an impressively fast-paced progress in a wide range of...
For many types of integrated circuits, accepting larger failure rates in compu...
In this paper we address the issue of output instability of deep neural networks: small perturbation...
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet...
This paper presents a DNN bottleneck reinforcement scheme to alleviate the vul...