Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, replacing them with a fast approximation can greatly reduce the overall computation time. This paper describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. The result models the exponential function about as accurately as a lookup table with linear interpolation, but is significantly faster and more compact.
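
To make the idea concrete, the following is a minimal sketch of the kind of bit-level approximation the abstract describes; it is an illustration under stated assumptions, not the paper's exact code. It assumes a little-endian machine with 64-bit IEEE-754 doubles, so the sign and exponent bits occupy the upper 32-bit word of the double, and the constants (2^20 / ln 2 for the slope, 1023 * 2^20 minus a small correction term for the offset) are representative values for this scheme rather than a definitive choice.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Sketch of exponentiation via IEEE-754 bit manipulation.
 * Assumes a little-endian machine with 64-bit IEEE-754 doubles, so the
 * exponent field sits in the upper 32-bit word of the double.  Writing
 * a*x + b into that word, with a = 2^20 / ln 2 and b = 1023 * 2^20 minus
 * a small correction, makes the stored exponent and top mantissa bits
 * track e^x; the constants below are illustrative. */
static double fast_exp(double x)
{
    union { double d; struct { int32_t lo, hi; } s; } u;
    u.s.lo = 0;                                        /* clear low mantissa bits */
    u.s.hi = (int32_t)(1512775.395195186 * x           /* 2^20 / ln 2 */
                       + (1072693248.0 - 60801.0));    /* 1023*2^20 minus correction */
    return u.d;
}

int main(void)
{
    /* Quick comparison against libm's exp() over a small range. */
    for (double x = -2.0; x <= 2.0; x += 0.5)
        printf("x=%5.1f   exp=%10.6f   fast_exp=%10.6f\n", x, exp(x), fast_exp(x));
    return 0;
}

Compiled with a standard C compiler (linking libm with -lm), the sketch prints the library exponential next to the approximation; the relative error of such a scheme stays within a few percent over a moderate input range, consistent with the lookup-table-with-interpolation accuracy mentioned above.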