This paper investigates the possibility of reducing power consumption in neural networks using approximate computing techniques. The authors compare a traditional fixed-point neuron with an approximate neuron composed of approximate multipliers and adders. Experiments show that, in the proposed case study (a wine classifier), the approximate neuron yields area savings of up to 43%, a power consumption reduction of 35%, and an improvement in the maximum clock frequency of 20%.
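To make the comparison concrete, the sketch below contrasts an exact fixed-point multiply-accumulate neuron with one built on a simple approximate multiplier. The paper does not state which approximate multiplier and adder designs it uses, so the operand-truncation scheme, the Q7.8 fixed-point format, and the TRUNC_K parameter here are illustrative assumptions, not the authors' actual circuit.

```c
/* Minimal sketch (assumptions, not the paper's design): exact vs. approximate
 * fixed-point neuron, where the approximation drops operand LSBs. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8   /* assumed Q7.8 fixed-point format */
#define TRUNC_K   4   /* assumed number of operand LSBs dropped by the approximate multiplier */

/* Exact fixed-point multiply: full-width product, then rescale. */
static int16_t mul_exact(int16_t a, int16_t b) {
    return (int16_t)(((int32_t)a * b) >> FRAC_BITS);
}

/* Approximate multiply: zero the K least-significant bits of each operand
 * before multiplying; shrinking the partial-product array is one classic
 * way to trade accuracy for area and power. */
static int16_t mul_approx(int16_t a, int16_t b) {
    int16_t mask = (int16_t)~((1 << TRUNC_K) - 1);
    return (int16_t)(((int32_t)(a & mask) * (b & mask)) >> FRAC_BITS);
}

/* Neuron pre-activation: bias plus weighted sum of inputs. */
static int16_t neuron(const int16_t *w, const int16_t *x, int n, int16_t bias,
                      int16_t (*mul)(int16_t, int16_t)) {
    int32_t acc = bias;
    for (int i = 0; i < n; i++)
        acc += mul(w[i], x[i]);
    return (int16_t)acc;
}

int main(void) {
    int16_t w[3] = {64, -32, 128};   /* 0.25, -0.125, 0.5 in Q7.8 */
    int16_t x[3] = {256, 192, 100};  /* sample inputs in Q7.8 */
    printf("exact : %d\n", neuron(w, x, 3, 10, mul_exact));
    printf("approx: %d\n", neuron(w, x, 3, 10, mul_approx));
    return 0;
}
```

Zeroing low-order operand bits introduces a small numerical error in the weighted sum while simplifying the multiplier hardware, which is the kind of accuracy-for-efficiency trade that underlies the area, power, and frequency figures reported above.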
A new design approach, called approximate computing (AxC), leverages the flexibility provided by int...
A new design paradigm, Approximate Computing (AxC), has been established to in...
Smart Systems applications often include error resilient computations, due to the presence of noisy ...
Approximate computation is a new trend that explores and harnesses trade-offs between the precision ...
The cessation of Moore’s Law has limited further improvements in power efficiency. In recen...
Embedding Machine Learning enables integrating intelligence in recent application domains such as In...
Many error resilient applications can be approximated using multi-layer perceptrons (MLPs) with insi...
Recently convolutional neural networks (ConvNets) have come up as state-of-the-art clas...
Computation accuracy can be adequately tuned to the specific application requirements in order to re...
This paper investigates the energy savings that near-subthreshold processors can obtain in edge AI a...
Neural networks (NN), one type of machine learning (ML) algorithms, have emerged as a powerful parad...
The reduced benefits offered by technology scaling in the nanoscale era call for innovative design a...
This paper discusses some of the limitations of hardware implementations of neural networks. The aut...
Approximate computing has recently emerged as a promising approach to energy-efficient de...