We study finite-sample expressivity, i.e., the memorization power of ReLU networks. Recent results require N hidden nodes to memorize/interpolate arbitrary N data points. In contrast, by exploiting depth, we show that 3-layer ReLU networks with Ω(√N) hidden nodes can perfectly memorize most datasets with N points. We also prove that width Θ(√N) is necessary and sufficient for memorizing N data points, proving tight bounds on memorization capacity. The sufficiency result can be extended to deeper networks; we show that an L-layer network with W parameters in the hidden layers can memorize N data points if W = Ω(N). Combined with a recent upper bound O(WL log W) on the VC dimension ...
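The width claim above can be probed empirically. The following is a minimal sketch, not the paper's construction: it trains a 3-layer ReLU network (two hidden layers of width on the order of √N) on N random points and reports the final training error. The dataset size, input dimension, width constant, and optimizer settings are illustrative assumptions, and the theorem is an existence statement about weights, so plain gradient training is not guaranteed to reach zero loss even when an interpolating network of this width exists.

```python
# Empirical sketch (assumptions: random Gaussian data, width = sqrt(N),
# Adam with default-ish settings). Checks how well a 3-layer ReLU network
# with ~sqrt(N) hidden nodes can fit N arbitrary real labels.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

N, d = 1024, 16                      # N data points in d dimensions
width = int(math.sqrt(N))            # hidden width ~ sqrt(N)

X = torch.randn(N, d)                # random inputs (distinct with high prob.)
y = torch.randn(N, 1)                # arbitrary real labels to memorize

# 3-layer ReLU network: two hidden ReLU layers, i.e. three weight layers.
model = nn.Sequential(
    nn.Linear(d, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

# A small final MSE indicates the labels were (approximately) memorized;
# the bound itself only guarantees that suitable weights exist.
print(f"final training MSE: {loss.item():.3e}")
```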
We present a model of long term memory: learning within irreversible bounds. The best bound values ...
Learning to solve sequential tasks with recurrent models requires the ability to memorize long seque...
A general relationship is developed between the VC-dimension and the statistical lower epsilon-capac...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
Overwhelming theoretical and empirical evidence shows that mildly overparametrized neural networks -...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
Rectified linear units (ReLUs) have become the main model for the neural units in current deep learn...
We prove in this paper that optimizing wide ReLU neural networks (NNs) with at least one hidden laye...
We contribute to a better understanding of the class of functions that can be represented by a neura...
By applying concepts from the statistical physics of learning, we study layered neural networks of r...
We consider general approximation families encompassing ReLU neural networks. On the one hand, we in...
Recurrent networks are trained to memorize their input better, often in the hopes that such training...
We can compare the expressiveness of neural networks that use rectified linear units (ReLUs) by the ...
The Neural Tangent Kernel (NTK) has emerged as a powerful tool to provide memorization, optimization...
In artificial neural networks, learning from data is a computationally demanding task in which a lar...