We propose an optimization method for the automatic design of approximate multipliers that minimizes the average error according to the operand distributions. Our multiplier achieves up to 50.24% higher accuracy in DNNs than the best reproduced approximate multiplier, with 15.76% smaller area, 25.05% lower power consumption, and 3.50% shorter delay. Compared with an exact multiplier, our multiplier reduces area, power consumption, and delay by 44.94%, 47.63%, and 16.78%, respectively, with negligible accuracy loss. The tested DNN accelerator modules built with our multiplier achieve up to 18.70% smaller area and 9.99% lower power consumption than the original modules.
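For illustration only, the following minimal Python sketch evaluates the objective this abstract describes: the average (expected) multiplication error weighted by given operand distributions, scored over a hypothetical family of truncation-based approximate multipliers. The function names, the candidate family, and the example distribution are assumptions, not the authors' actual design flow.

    import itertools

    def approx_mul(a, b, drop_bits):
        # Hypothetical candidate: truncate the lowest `drop_bits` bits of each
        # operand before multiplying (stands in for any approximate design).
        mask = ~((1 << drop_bits) - 1)
        return (a & mask) * (b & mask)

    def expected_error(drop_bits, dist_a, dist_b):
        # Average absolute error weighted by the operand distributions,
        # i.e. the sum over (a, b) of P(a) * P(b) * |approx(a, b) - a * b|.
        return sum(pa * pb * abs(approx_mul(a, b, drop_bits) - a * b)
                   for (a, pa), (b, pb) in itertools.product(dist_a.items(), dist_b.items()))

    # Assumed example: 4-bit operands skewed toward small values, loosely
    # mimicking post-ReLU DNN activation statistics.
    weights = list(range(16, 0, -1))
    total = sum(weights)
    dist = {v: w / total for v, w in zip(range(16), weights)}
    for k in range(4):
        print(f"drop {k} bits -> expected error {expected_error(k, dist, dist):.3f}")

A design-space search in this spirit would then pick, among candidates meeting an area or delay budget, the one with the smallest expected error under the measured operand distributions.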
There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-...
Due to limited size, cost and power, embedded devices do not offer the same computational throughput...
The latest Deep Learning (DL) methods for designing Deep Neural Networks (DNN) have significantly ex...
Many error-resilient applications can be approximated using multi-layer perceptrons (MLPs) with insi...
The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of ...
Deep learning is a rising topic at the edge of technology, with applications in many areas of our li...
Edge training of Deep Neural Networks (DNNs) is a desirable goal for continuous learning; however, i...
Efficient implementation of deep neural networks (DNNs) on CPU-based systems is critical owing to th...
This paper introduces an energy-efficient design method for a Deep Neural Network (DNN) accelerator. A...
The need to support various machine learning (ML) algorithms on energy-constrained computing devices...
In recent years there has been a growing interest in hardware neural networks, which express many be...
Over the past decade, the rapid development of deep learning (DL) algorithms has enabled extraordina...
Deep neural networks (DNNs) are currently widely used for many artificial intelligence ...
The Posit Number System was introduced in 2017 as a replacement for floating-point numbers. Since th...
Over the past decade, machine learning (ML) with deep neural networks (DNNs) has become ext...