We study the necessary and sufficient complexity of ReLU neural networks---in terms of depth and number of weights---which is required for approximating classifier functions in $L^2$. As a model class, we consider the set $E^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $E^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in term...
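To make the approximation claim concrete, the following minimal NumPy sketch (our illustration, not a construction from the paper) approximates a one-dimensional jump $1_{x>0}$ on $[-1/2, 1/2]$ with just two ReLU units: a ramp of width $\delta$ incurs an $L^2$ error of $\sqrt{\delta/3}$, so the error can be driven to $\varepsilon$ without increasing the number of weights. This is consistent with the stated rate $O(\varepsilon^{-2(d-1)/\beta})$ degenerating to $O(1)$ when $d = 1$. The name `step_approx` and all parameter values are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def step_approx(x, delta):
    # Two-ReLU network: relu(x/delta) - relu(x/delta - 1) is 0 for x <= 0,
    # ramps linearly on [0, delta], and equals 1 for x >= delta.
    return relu(x / delta) - relu(x / delta - 1.0)

# L2 error on [-1/2, 1/2]: the sketch differs from the step 1_{x>0} only on
# the ramp, and integrating (1 - x/delta)^2 over [0, delta] gives delta/3.
xs = np.linspace(-0.5, 0.5, 200_001)
target = (xs > 0).astype(float)
for delta in (1e-1, 1e-2, 1e-3):
    # The interval has length 1, so the root-mean-square equals the L2 norm.
    rmse = np.sqrt(np.mean((step_approx(xs, delta) - target) ** 2))
    print(f"delta={delta:g}: L2 error ~ {rmse:.4f} (theory: {np.sqrt(delta / 3):.4f})")
```

Shrinking $\delta$ only rescales two weights; it is the jump *interfaces* in dimension $d \geq 2$, which are $(d-1)$-dimensional hypersurfaces, that force the number of weights to grow at the rate above.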
In this article, we develop a framework for showing that neural networks can overcome the curse of d...
We study the expressive power of deep ReLU neural networks for approximating functions in dilated sh...
This paper develops simple feed-forward neural networks that achieve the universal approximation pro...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
We contribute to a better understanding of the class of functions that can be represented by a neura...
Recently there has been much interest in understanding why deep neural networks are preferred to sha...
The paper briefly reviews several recent results on hier...
The first part of this thesis develops fundamental limits of deep neural network learning by charact...
This paper focuses on establishing $L^2$ approximation properties for deep ReLU convolutional neural...
We investigate the efficiency of approximation by linear combinations of ridge functions in the met...
We prove that neural networks with a single hidden layer are capable of providing an optim...
We consider neural network approximation spaces that classify functions according to the rate at whi...
We propose an optimal architecture for deep neural networks of given size. The optimal architecture ...
We establish in this work approximation results of deep neural networks for smooth functions measure...
We consider general approximation families encompassing ReLU neural networks. On the one hand, we in...