Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can approximate functions of several variables, in particular those that are compositions of low-dimensional functions. We show that the power of a deep network architecture relative to a shallow network is rather independent of the specific nonlinear operations in the network and depends instead on the behavior of the VC-dimension. A shallow network can approximate compositional functions with the same error as a deep network, but at the cost of a VC-dimension that is exponential rather than quadratic in the dimension...
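As a minimal worked illustration of the compositional class this abstract refers to (an example of the standard binary-tree construction, not a function taken from the paper itself), consider a function of d = 8 variables built entirely from bivariate constituents h_{ij}:

f(x_1, ..., x_8) = h_3( h_{21}( h_{11}(x_1, x_2), h_{12}(x_3, x_4) ), h_{22}( h_{13}(x_5, x_6), h_{14}(x_7, x_8) ) )

A deep network whose graph mirrors this tree only ever has to approximate two-variable constituents, one per node (here 7 nodes for 8 inputs), so its complexity grows with the number of nodes; a shallow network must instead treat f as a generic function of all 8 variables at once, which is where the exponential-versus-quadratic gap in VC-dimension arises.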
Deep learning has demonstrated unreasonable effectiveness on several high dimensional regression and...
Currently, deep neural networks are the state of the art on problems such as speech recognition and ...
Recently, researchers in the artificial neural network field have focused their attention on connect...
We describe computational tasks - especially in vision - that correspond to compositional/hierarchic...
While the universal approximation property holds both for hierarchical and shallow networks, deep ne...
The paper reviews and extends an emerging body of theoretical results on deep learning including the...
[formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionali...
The paper briefly reviews several recent results on hierarchical architectures for learning from exa...
We show that deep networks ...
The paper characterizes classes of functions for which deep learning can be exponentially better tha...
Recently there has been much interest in understanding why deep neural networks are preferred to sha...
The main success stories of deep learning, starting with ImageNet, depend on convolutional networks,...
We investigate the representational power of sum-product networks (computation networks analogous to...
Recently, deep networks were proved to be more effective than shallow architectures to face complex ...
How does a 110-layer ResNet learn a high-complexity classifier using relatively few training example...