At present, the most efficient machine learning technique is deep learning, with neurons using the Rectified Linear Unit (ReLU) activation function s(z) = max(0, z). In many cases, however, the use of Rectified Power (RePU) activation functions (s(z))^p, for some p, leads to better results. In this paper, we explain these results by proving that RePU functions (or their leaky versions) are optimal with respect to all reasonable optimality criteria.
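As a concrete illustration (a minimal sketch of our own, not taken from the paper; the function names, the choice p = 2, and the leaky slope alpha are illustrative assumptions, and the paper's exact definition of the leaky variant may differ), the activation functions compared above can be written in a few lines of Python/NumPy:

import numpy as np

def relu(z):
    # Rectified Linear Unit: s(z) = max(0, z)
    return np.maximum(0.0, z)

def repu(z, p=2):
    # Rectified Power Unit: (s(z))^p = (max(0, z))^p; p = 2 is the squared ReLU
    return np.maximum(0.0, z) ** p

def leaky_repu(z, p=2, alpha=0.01):
    # One common "leaky" variant (assumed form): z^p for positive inputs,
    # a small linear slope alpha*z otherwise, so the gradient does not
    # vanish exactly on the negative side
    return np.where(z > 0, z ** p, alpha * z)

z = np.linspace(-2.0, 2.0, 5)   # [-2, -1, 0, 1, 2]
print(relu(z))         # [0. 0. 0. 1. 2.]
print(repu(z))         # [0. 0. 0. 1. 4.]
print(leaky_repu(z))   # [-0.02 -0.01  0.    1.    4.  ]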
We propose an optimal architecture for deep neural networks of given size. The optimal architecture ...
Successes of deep learning are partly due to appropriate selection of activation function, pooling f...
While non-linear activation functions play vital roles in artificial neural networks, it is generall...
In many applications, in particular, in econometric applications, deep learning techniques are very e...
Most multi-layer neural networks used in deep learning utilize rectified linear neurons. In our prev...
We consider neural networks with rational activation functions. The choice of the nonlinear activati...
Deep neural networks, as a powerful system to represent high dimensional complex functions, play a k...
Traditionally, neural networks used a sigmoid activation function. Recently, it turned out that piec...
At present, the most efficient machine learning techniques are deep neural networks. In these networ...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
MEng thesis (Computer and Electronic Engineering), North-West University, Potchefstroom Campus: The ability o...
Since in the physical world, most dependencies are smooth (differentiable), traditionally, smooth fu...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
The activation function plays an important role in training and improving performance in deep neural...
In this article we present new results on neural networks with linear threshold activation functions...