Neural networks can be costly in terms of memory and execution time. Reducing their cost has become an objective, especially when they are integrated in embedded systems with limited resources. A possible solution consists in reducing the precision of their neuron parameters. In this article, we present how to use auto-tuning on neural networks to lower their precision while keeping an accurate output. To do so, we apply a floating-point auto-tuning tool to different kinds of neural networks. We show that, to some extent, we can lower the precision of several neural network parameters without compromising the accuracy requirement.
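The abstract above describes lowering the floating-point precision of network parameters while checking that the output stays within an accuracy requirement. As a minimal sketch of that idea (not the tool used in the article), the following truncates the mantissa of each weight to a given bit width and searches for the smallest width whose output error stays below a tolerance; `reduce_precision`, `forward`, and `autotune` are illustrative names, not from the source:

```python
import struct

def reduce_precision(x, mantissa_bits):
    """Keep only the top `mantissa_bits` of a float64's 52-bit mantissa."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    mask = ~((1 << (52 - mantissa_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits & mask))[0]

def forward(weights, inputs, mantissa_bits=52):
    """One linear neuron, with weights truncated to the given precision."""
    return sum(reduce_precision(w, mantissa_bits) * x
               for w, x in zip(weights, inputs))

def autotune(weights, inputs, tolerance):
    """Lower the mantissa width until the output error exceeds the tolerance;
    return the smallest width that still met it."""
    reference = forward(weights, inputs)  # full float64 output
    best = 52
    for bits in range(52, 0, -1):
        if abs(forward(weights, inputs, bits) - reference) > tolerance:
            break
        best = bits
    return best
```

For example, `autotune([0.3, -1.7, 0.05], [1.0, 0.5, 2.0], 1e-3)` returns the smallest mantissa width for which this single neuron's output stays within `1e-3` of the full-precision result. Real auto-tuning tools work per layer or per variable on a validation set rather than per neuron, but the search structure is the same.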
Hardware accelerators for Deep Neural Networks (DNNs) that use reduced precision parameters are more...
A recurring problem faced when training neural networks is that there is typically not enough data t...
The ever-growing cost of both training and inference for state-of-the-art neur...
To get the most out of powerful tools expert knowledge is often required. Experts are the ones with ...
Approximate computing has emerged as a promising approach to energy-efficient design of digital syst...
We explore unique considerations involved in fitting machine learning (ML) models to data with very ...
The acclaimed successes of neural networks often overshadow their tremendous complexity. We focus on...
Neural network performance has been significantly improved in the last few years, at the cost of an...
Deep Neural Networks (DNN) represent a performance-hungry application. Floatin...
A recent survey (1) has reported that the majority of industrial loops are controlled by PID-type co...
The last decade has witnessed the breakthrough of deep neural networks (DNNs) in many fields. With t...
Neural networks are increasingly being used as components in safety-critical applications, for insta...
The development of deep learning has led to a dramatic increase in the number of applications of art...
In this paper, we presented a self-tuning control algorithm based on a three-layer perceptron-type ...
The use of low numerical precision is a fundamental optimization included in modern accelerators for...