We study the problem of approximating compactly-supported integrable functions while implementing their support set using feedforward neural networks. Our first main result transcribes this "structured" approximation problem into a universality problem. We do this by constructing a refinement of the usual topology on the space $L^1_{\operatorname{loc}}(\mathbb{R}^d,\mathbb{R}^D)$ of locally-integrable functions in which compactly-supported functions can only be approximated in $L^1$-norm by functions with matching discretized support. We establish the universality of ReLU feedforward networks with bilinear pooling layers in this refined topology. Consequently, we find that ReLU feedforward networks with bilinear pooling can approximate compactly-supported functions...
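For intuition, here is a minimal sketch of the architecture class named above: a ReLU feedforward network composed with a bilinear pooling layer. This is an illustrative reading, not the paper's construction; the class name `ReLUBilinearNet` and all layer widths are hypothetical, and the bilinear pooling is realized via PyTorch's `nn.Bilinear` applied to the hidden features paired with themselves.

```python
import torch
import torch.nn as nn

class ReLUBilinearNet(nn.Module):
    """Hypothetical sketch: a ReLU feedforward body followed by a
    bilinear pooling layer. Widths and depth are illustrative only."""

    def __init__(self, d_in: int, d_hidden: int, d_out: int):
        super().__init__()
        # Plain ReLU feedforward body.
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        # Bilinear pooling: each output coordinate is h^T W_k h + b_k,
        # pairing the hidden representation with itself.
        self.pool = nn.Bilinear(d_hidden, d_hidden, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        return self.pool(h, h)

# Usage: a batch of 8 points in R^3 mapped to scalar outputs.
net = ReLUBilinearNet(d_in=3, d_hidden=64, d_out=1)
y = net(torch.randn(8, 3))
```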
This paper develops simple feed-forward neural networks that achieve the universal approximation pro...
It is well-known that the parameterized family of functions representable by fully-connected feedfor...
The learning speed of feed-forward neural networks is notoriously slow and has presented a bottlenec...
This paper explores the expressive power of deep neural networks through the framework of function c...
We define a neural network in infinite dimensional spaces for which we can show the universal approx...
We contribute to a better understanding of the class of functions that can be represented by a neura...
This paper focuses on establishing $L^2$ approximation properties for deep ReLU convolutional neural...
Neural networks with the Rectified Linear Unit (ReLU) nonlinearity are described by a vector of para...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
Recently there has been much interest in understanding why deep neural networks are preferred to sha...
The universal approximation theorem is generalised to uniform convergence on the (noncompact) input ...
We solve an open question from Lu et al. (2017) by showing that any target network with inputs in $...
Several researchers characterized the activation function under which multilayer feedforward network...
Deep neural networks, as a powerful system to represent high dimensional complex functions, play a k...
This paper concerns the universality of the two-layer neural network with the $k$-rectified linear u...
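The "$k$-rectified linear unit" in the last abstract is commonly taken to mean the activation $x \mapsto \max(x,0)^k$ (often written ReLU$^k$). Under that assumption, the sketch below shows a two-layer network $f(x) = \sum_i a_i\,\mathrm{ReLU}^k(w_i \cdot x + b_i)$ with this activation; all names, shapes, and the choice $k=2$ are illustrative, not taken from the paper.

```python
import numpy as np

def relu_k(x: np.ndarray, k: int = 2) -> np.ndarray:
    """ReLU^k activation: x -> max(x, 0)**k (assumed standard definition
    of the k-rectified linear unit)."""
    return np.maximum(x, 0.0) ** k

# Two-layer network with 16 ReLU^k hidden units on inputs in R^3:
# f(x) = sum_i a_i * relu_k(w_i . x + b_i). Weights are random for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 3))   # hidden-layer weights w_i
b = rng.normal(size=16)        # hidden-layer biases b_i
a = rng.normal(size=16)        # output weights a_i

def two_layer_net(x: np.ndarray) -> float:
    return relu_k(W @ x + b, k=2) @ a

print(two_layer_net(np.array([0.5, -1.0, 2.0])))
```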