In many applications, in particular in econometric applications, deep learning techniques are very effective. In this paper, we provide a new explanation for why rectified linear units -- the main units of deep learning -- are so effective. This explanation is similar to the usual explanation of why Gaussian (normal) distributions are ubiquitous -- namely, it is based on an appropriate limit theorem.
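Since this abstract and several of those below center on rectified linear units, it may help to recall the standard definition of the ReLU activation; this definition is not stated in the abstracts themselves and is added here only for reference:

\[
\mathrm{ReLU}(x) = \max(0, x) = \begin{cases} x, & x \ge 0, \\ 0, & x < 0. \end{cases}
\]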
Successes of deep learning are partly due to appropriate selection of activation function, pooling f...
Deep feedforward neural networks with piecewise linear activations are currently producing the state...
In this paper we propose and investigate a novel nonlinear unit, called Lp unit, for deep ...
At present, the most efficient machine learning techniques are deep neural networks. In these networ...
At present, the most efficient machine learning technique is deep learning, with neurons using Rect...
Traditionally, neural networks used a sigmoid activation function. Recently, it turned out that piec...
We show how the success of deep learning could depend ...
Several decades ago, traditional neural networks were the most efficient machine learning technique....
The remarkable practical success of deep learning has revealed some major surprises from a theoretic...
The increasing computational power and the availability of data have made it possible to train ever-...
Understanding the effect of depth in deep learning is a critical problem. In this work, we utilize t...
This paper argues that a notion of statistical explanation, based on Salmon's statistical relevance ...
Rectified linear units (ReLUs) have become the main model for the neural units in current deep learn...
Rectified linear activation units are important components for state-of-the-art deep convolutional n...
By applying concepts from the statistical physics of learning, we study layered neural networks of r...