The present work investigates the significance of arithmetic precision in neural network simulation. Noting that a biological brain consists of a large number of low-precision cells, we try to answer the question: with a fixed amount of memory and a fixed number of CPU cycles available for simulation, does a larger net with lower precision perform better than a smaller net with higher precision? We evaluate the merits and demerits of using low-precision integer arithmetic to simulate backpropagation networks. Two identical backpropagation simulators, ibp and fbp, were constructed on a Mac II: ibp with 16-bit integer representations of network parameters such as activation values, back-errors, and weights, and fbp with 96-bit floating-point repres...
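The abstract contrasts 16-bit integer and floating-point storage of activations, back-errors, and weights, but does not spell out the fixed-point scaling that ibp uses. The sketch below is therefore only illustrative: it assumes a hypothetical Q4.12 format (4 integer bits, 12 fractional bits), and the helper names to_fixed/to_float and the SCALE constant are inventions for the example, not anything from the paper.

    import numpy as np

    # 16-bit fixed-point storage of network parameters (assumed Q4.12 format:
    # 4 integer bits, 12 fractional bits). ibp's actual scaling scheme is not
    # given in the abstract; this only illustrates the general idea.
    FRAC_BITS = 12
    SCALE = 1 << FRAC_BITS                       # 4096 codes per unit

    def to_fixed(x):
        # Quantize float values to int16 codes, saturating at the type limits.
        q = np.round(np.asarray(x, dtype=np.float64) * SCALE)
        return np.clip(q, -32768, 32767).astype(np.int16)

    def to_float(q):
        # Recover the approximate real value represented by each int16 code.
        return q.astype(np.float64) / SCALE

    w = np.array([0.137, -1.25, 3.9])            # example weights
    w_q = to_fixed(w)
    print(w_q)                                   # [  561 -5120 15974]
    print(to_float(w_q))                         # values on a 1/4096 grid
    print(np.abs(w - to_float(w_q)).max())       # rounding error bounded by 1/8192

Under this assumed format, every stored parameter is rounded to the nearest 1/4096, which is the kind of quantization error whose effect on learning the two simulators are meant to compare.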
There is presently great interest in the abilities of neural networks to mimic "qualitative rea...
An important aspect of modern automation is machine learning. Specifically, neural networks are used...
We study the training of deep neural networks by gradient descent where floating-point arithmetic is...
The acclaimed successes of neural networks often overshadow their tremendous complexity. We focus on...
The effects of silicon implementation on the backpropagation learning rule in artificial neural syst...
In this paper we solve the problem of how to determine the maximal allowable errors possible for signals ...
In this paper we solve the problem of how to determine the maximal allowable errors possible for signals ...
The ever-increasing computational complexity of deep learning models makes their training and deploy...
This work describes the functional architecture models of the Back-Propagation (BP) algorithm for Mult...
A neural network is a computational paradigm that combines several disciplines, such as mathematics, ...
Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network...
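The BF16 entry above is only an opener, but the format itself is fixed: bfloat16 keeps float32's sign bit and 8-bit exponent and truncates the mantissa to 7 bits, so a BF16 value is simply the upper 16 bits of the corresponding float32 pattern. The sketch below is a minimal numpy illustration of that reduction with a round-to-nearest-even adjustment; the helper names are hypothetical and this is not any vendor's implementation.

    import numpy as np

    # BF16 = upper 16 bits of the float32 bit pattern (8-bit exponent kept,
    # mantissa cut to 7 bits). Rounding here is round-to-nearest-even.

    def float32_to_bf16_bits(x):
        bits = np.asarray(x, dtype=np.float32).view(np.uint32)
        rounded = bits + 0x7FFF + ((bits >> 16) & 1)
        return (rounded >> 16).astype(np.uint16)

    def bf16_bits_to_float32(b):
        # Re-expand a BF16 pattern; the discarded mantissa bits come back as zeros.
        return (b.astype(np.uint32) << 16).view(np.float32)

    x = np.array([1.0, 3.14159265, 1e-3, 65504.0], dtype=np.float32)
    x_bf = bf16_bits_to_float32(float32_to_bf16_bits(x))
    print(x_bf)                                  # roughly 2-3 significant decimal digits
    print(np.abs(x - x_bf) / np.abs(x))          # relative error about 2**-8 or less

Because the exponent field is unchanged, BF16 spans the same dynamic range as float32 while halving storage, which is the property the hardware proposals above rely on.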
Neural networks can learn complex functions, but they often have trouble extrapolating even si...
Reduced precision number fo...
The first successful implementation of Artificial Neural Networks (ANNs) was published a little over...
This paper deals with the computational aspects of neural networks. Specifically, it is suggested th...