Abstract: To reduce random access memory (RAM) requirements and to increase the speed of recognition algorithms, we consider a weight discretization problem for trained neural networks. We show that exponential discretization is preferable to linear discretization, since it achieves the same accuracy with 1 or 2 fewer bits. The quality of the VGG-16 neural network is already satisfactory (top5 accuracy 69%) with 3-bit exponential discretization. The ResNet50 neural network shows top5 accuracy of 84% at 4 bits. Other neural networks perform fairly well at 5 bits (top5 accuracies of Xception, Inception-v3, and MobileNet-v2 were 87%, 90%, and 77%, respectively). At fewer bits, the accuracy decre...
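The two discretization schemes compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function names, the power-of-two level placement, and the symmetric linear grid are assumptions for the sake of the example.

```python
import numpy as np

def quantize_exponential(w, bits=3):
    """Sketch of exponential discretization: snap each weight magnitude to
    the nearest signed power of two, keeping 2**(bits-1) exponent levels
    below the largest magnitude (one bit is spent on the sign)."""
    sign = np.sign(w)
    mag = np.abs(w)
    mag = np.where(mag == 0, np.finfo(float).tiny, mag)  # avoid log2(0)
    n_levels = 2 ** (bits - 1)
    max_exp = np.floor(np.log2(mag.max()))
    exps = np.clip(np.round(np.log2(mag)), max_exp - n_levels + 1, max_exp)
    return sign * 2.0 ** exps

def quantize_linear(w, bits=3):
    """Sketch of linear discretization for comparison: a uniform grid of
    2**(bits-1) - 1 steps per sign over [-max|w|, max|w|]."""
    scale = np.abs(w).max()
    n = 2 ** (bits - 1) - 1
    return np.round(w / scale * n) / n * scale
```

At 3 bits the exponential grid covers magnitudes spanning a factor of 2**3 = 8, whereas the linear grid spends its levels uniformly and rounds small weights to zero, which is one intuition for why the exponential scheme tolerates fewer bits.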
Multilayer feedforward neural nets with integer weights can be used to approximate the response of t...
The authors introduce a restricted model of a neuron which is more practical as a model of computati...
A perceptron is trained by a random bit sequence. In comparison to the corresponding classificatio...
Abstract—In this article, we develop a method of linear and exponential quantization of neural netwo...
In this work we present neural network training algorithms, which are based on the differential ev...
The conventional multilayer feedforward network having continuous weights is expensive to implement...
This thesis is concerned with a numerical approximation technique for feedforward artificial neural ...
Abstract—It has been known for some years that the uniform-density problem for forward neural networ...
The application of programmable devices to implement neural networks requires efficient training alg...
Abstract—This paper investigates how to reduce error and increase speed of backpropagation ANN by c...
Previously, we have introduced the idea of neural network transfer, where learning on a target prob...
Artificial Neural Networks (ANN) are able to simplify recognition tasks and have been steadily impro...
In recent years, multilayer feedforward neural networks (NN) have been shown to be very effective to...
Abstract. It is shown that high-order feedforward neural nets of constant depth with piecewise-polyn...