This paper proposes a low-cost approximate dynamic range multiplier and describes its use during the training process of convolutional neural networks (CNNs). It has been noted that approximate multipliers can be used in the convolutions of a CNN's forward path. However, in CNN inference with post-training quantization of a pre-trained model, erroneous convolution outputs from highly approximate multipliers significantly degrade performance. On the other hand, when the CNN model is trained with the approximate multiplier in place, the approximation-aware training process can optimize its learnable parameters, producing better classification results on the approximate hardware. We analyze the error distribution of the approximate dynamic r...
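The idea of a dynamic range approximate multiplier can be illustrated with a minimal sketch: keep only the top few significant bits of each operand (a hypothetical `keep_bits` parameter, not taken from the paper) and multiply the truncated values, then compare exact and approximate dot products as a stand-in for one convolution output element. This is an assumption-laden toy model, not the paper's actual circuit.

```python
def approx_mul(a: int, b: int, keep_bits: int = 4) -> int:
    """Dynamic-range-style approximate multiply: retain only the top
    `keep_bits` significant bits of each operand, zeroing the rest,
    then multiply exactly. `keep_bits` is an illustrative parameter."""
    def truncate(x: int) -> int:
        if x == 0:
            return 0
        sign = -1 if x < 0 else 1
        x = abs(x)
        shift = max(x.bit_length() - keep_bits, 0)
        return sign * ((x >> shift) << shift)
    return truncate(a) * truncate(b)

# Exact vs. approximate dot product, mimicking one convolution output:
w = [13, -7, 25, 3]   # toy filter weights
x = [9, 14, -6, 11]   # toy input activations
exact = sum(wi * xi for wi, xi in zip(w, x))
approx = sum(approx_mul(wi, xi) for wi, xi in zip(w, x))
```

In approximation-aware training, the forward path would use `approx_mul` (or a differentiable surrogate of its error), so the learned weights compensate for the systematic truncation error rather than encountering it only at inference time.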
Recently, deep learning has been at the forefront of state-of-the-art machine learning algorithms and ...
This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs)...
The growth in size and complexity of convolutional neural networks (CNNs) is forcing the partitionin...
This article analyzes the effects of approximate multiplication when performing inferences on deep c...
The breakthroughs in multi-layer convolutional neural networks (CNNs) have caused significant progre...
This paper presents a low-cost two-stage approximate multiplier for bfloat16 (brain floating-point) ...
In this paper, we present a case study on approximate multipliers for MNIST Convolutional Neural Net...
Previous studies have demonstrated that, up to a certain degree, Convolutional Neural Networks (CNNs...
This thesis provides an introduction to classical and convolutional neural networks. It describes ho...
In this work, a deterministic sequence suitable for approximate computing on stochastic computing ha...
Today, computer vision (CV) problems are solved with unprecedented accuracy using convolutional neur...
Convolutional neural networks achieve impressive results for image recognition tasks, but are often ...
Stochastic computing (SC) allows for extremely low cost and low power implementations of common arit...
Convolutional neural networks (CNNs) are currently state-of-the-art for various classification tasks...
Binary Convolutional Neural Networks (CNNs) can significantly reduce the number of arithmetic operat...