In this paper, we present a case study on approximate multipliers for an MNIST Convolutional Neural Network (CNN). We apply approximate multipliers with different bit-widths to the convolution layer of the MNIST CNN, evaluate the accuracy of MNIST classification, and analyze the trade-off between the approximate multiplier's area, critical path delay, and the resulting accuracy. Based on the results of this evaluation and analysis, we propose a design methodology for approximate multipliers. The approximate multipliers accumulate only a subset of the partial products, which are carefully selected according to the CNN input distribution. With this methodology, we further reduce the area and the delay of the multipliers while maintaining high accuracy of the MNIST classification.
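To make the idea concrete, the following is a minimal software sketch, under our own assumptions, of a shift-and-add multiplier that accumulates only selected partial products; the function names (`exact_mul`, `approx_mul`) and the choice of which positions to keep are purely illustrative and do not reproduce the paper's actual hardware design or selection procedure.

```python
def exact_mul(a: int, b: int, bits: int = 8) -> int:
    """Exact shift-and-add multiplication: accumulate all partial products."""
    return sum(((a >> i) & 1) * (b << i) for i in range(bits))


def approx_mul(a: int, b: int, keep: set, bits: int = 8) -> int:
    """Approximate multiplication (illustrative sketch): only partial products
    at the bit positions listed in `keep` are accumulated; the rest are
    dropped, trading numerical accuracy for hardware area and delay."""
    return sum(((a >> i) & 1) * (b << i) for i in range(bits) if i in keep)


if __name__ == "__main__":
    a, b = 173, 46
    # Keep only the four most significant partial products (an arbitrary choice
    # for demonstration; the paper selects them according to the CNN input).
    print(exact_mul(a, b), approx_mul(a, b, keep={4, 5, 6, 7}))
```

In hardware terms, each dropped partial product removes one row of AND gates and shortens the adder tree, which is the source of the area and delay savings discussed above; the cost is a data-dependent error that the CNN's classification accuracy must tolerate.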