As the use of Artificial Neural Networks (ANNs) in mobile embedded devices becomes more pervasive, the power consumption of ANN hardware is becoming a major limiting factor. Although considerable research effort is now directed towards low-power implementations of ANNs, the issue of dynamic power scalability of the implemented design has been largely overlooked. In this paper, we discuss the motivation and basic principles for implementing power scaling in ANN hardware. With the help of a simple example, we demonstrate how power scaling can be achieved with dynamic pruning techniques.
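The following is a minimal sketch of the general idea behind dynamic pruning for power scaling, not the paper's actual design: at inference time, only a fraction of the largest-magnitude weights is used, so the number of multiply-accumulate (MAC) operations, a rough proxy for dynamic power, shrinks as the keep ratio is lowered. The layer sizes, keep_ratio parameter, and magnitude-based policy are illustrative assumptions.

# Hypothetical illustration of dynamic pruning for power scaling.
# Lower keep_ratio -> fewer active weights -> fewer MACs (proxy for power),
# at some cost in accuracy. All sizes and the policy are assumptions.
import numpy as np

def pruned_forward(x, W, b, keep_ratio):
    # Keep roughly keep_ratio of the weights with the largest magnitude.
    k = max(1, int(keep_ratio * W.size))
    threshold = np.sort(np.abs(W), axis=None)[-k]   # magnitude cut-off
    mask = np.abs(W) >= threshold                   # active weights
    macs = int(mask.sum()) * x.shape[0]             # crude work estimate
    y = x @ (W * mask) + b
    return np.maximum(y, 0.0), macs                 # ReLU output, MAC count

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))    # batch of 8 input vectors
W = rng.standard_normal((64, 32))
b = np.zeros(32)

for ratio in (1.0, 0.5, 0.25):      # three power/accuracy operating points
    _, macs = pruned_forward(x, W, b, ratio)
    print(f"keep_ratio={ratio:.2f} -> MACs={macs}")

In a power-scalable design, the keep ratio would be adjusted at run time (e.g., by a power manager) rather than fixed at compile time, which is what distinguishes dynamic pruning from static model compression.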
Machine learning has achieved great success in recent years, especially the deep learning algorithms...
There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-...
Deep Neural Networks (DNN) have reached an outstanding accuracy in the past years, often going beyon...
Recent research into Artificial Neural Networks (ANN) has highlighted the potential of using compact...
This paper addresses the problem of accelerating large artificial neural networks (ANN), whose topol...
This paper investigates the energy savings that near-subthreshold processors can obtain in edge AI a...
Experience shows that cooperating and communicating computing systems, comprising segregated single ...
With the explosion of AI in recent years, there has been an exponential rise in the demand for compu...
Deep neural networks virtually dominate the domain of most modern vision systems, providing high per...
In this paper, we present a flexible, simple and accurate power modeling technique that can be used ...
The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of ...
In this article, we present a new, simple, accurate, and fast power estimation technique that can be...
Large Deep Neural Networks (DNNs) are the backbone of today's artificial intelligence due to their a...
The development of deep learning has led to a dramatic increase in the number of applications of art...
In this paper, we present a new, simple, accurate and fast power estimation te...