The Posit Number System was introduced in 2017 as a replacement for floating-point numbers. Since then, the community has explored its application in Neural Network-related tasks and produced some unit designs which are still far from being competitive with their floating-point counterparts. This paper proposes a Posit Logarithm-Approximate Multiplication (PLAM) scheme to significantly reduce the complexity of posit multipliers, the most power-hungry units within Deep Neural Network architectures. When compared with state-of-the-art posit multipliers, experiments show that the proposed technique reduces the area, power, and delay of hardware multipliers by up to 72.86%, 81.79%, and 17.01%, respectively, without accuracy degradation.
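The abstract does not spell out the approximation itself; as an illustration of the general logarithm-approximate multiplication idea (a Mitchell-style approximation of the mantissa product, which is the usual basis for such schemes), a minimal Python sketch follows. The function name and the float-based operand decoding are illustrative only and are not the paper's posit hardware design.

```python
import math

def mitchell_approx_mul(a: float, b: float) -> float:
    """Logarithm-approximate multiplication (Mitchell-style sketch).

    Each operand is split into sign, exponent e, and fraction f in [0, 1),
    so that |x| = 2**e * (1 + f).  The exact mantissa product
    (1 + fa) * (1 + fb) is approximated by (1 + fa + fb): the log of the
    mantissa is approximated by the fraction itself, which turns the
    multiplication into a fixed-point addition in hardware.
    """
    if a == 0.0 or b == 0.0:
        return 0.0
    sign = -1.0 if (a < 0) != (b < 0) else 1.0
    ma, ea = math.frexp(abs(a))                # abs(a) = ma * 2**ea, ma in [0.5, 1)
    mb, eb = math.frexp(abs(b))
    fa, fb = 2.0 * ma - 1.0, 2.0 * mb - 1.0    # fractions in [0, 1)
    e, f = ea + eb - 2, fa + fb                # add exponents and fractions
    if f >= 1.0:                               # fraction overflow: carry into exponent
        e, f = e + 1, f - 1.0
    return sign * math.ldexp(1.0 + f, e)

# The approximation error is bounded (relative error roughly below 11.1%)
# and vanishes when either fraction is zero, e.g. for powers of two.
print(mitchell_approx_mul(3.0, 5.0))   # exact: 15.0, approximate: 14.0
print(mitchell_approx_mul(4.0, 8.0))   # powers of two are exact: 32.0
```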