A Random Vector Functional Link (RVFL) network is a depth-2 neural network with random inner weights and biases. Because only the outer weights of such an architecture need to be learned, the learning process boils down to a linear optimization task, allowing one to sidestep the pitfalls of nonconvex optimization. In this paper, we prove that an RVFL with ReLU activation functions can approximate Lipschitz continuous functions provided its hidden layer is exponentially wide in the input dimension. Although it has previously been established that such approximation can be achieved in the $L_2$ sense, we prove it for the $L_\infty$ approximation error and Gaussian inner weights. To the best of our knowledge, our result is the first of its kind. We giv...
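As a concrete illustration of the linear learning step described above, the following minimal sketch fits an RVFL with frozen i.i.d. Gaussian inner weights and biases, ReLU hidden features, and outer weights obtained from a ridge-regularized least-squares solve. The function names (fit_rvfl, predict_rvfl), the hyperparameters, and the ridge regularization are illustrative assumptions, not details from the abstract; direct input–output connections used by some RVFL variants are omitted for brevity.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fit_rvfl(X, y, width=512, scale=1.0, reg=1e-6, seed=None):
    """Fit an RVFL: random Gaussian inner weights/biases (frozen),
    ReLU hidden features, outer weights via regularized least squares."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, scale, size=(d, width))   # random inner weights (not trained)
    b = rng.normal(0.0, scale, size=width)        # random biases (not trained)
    H = relu(X @ W + b)                           # hidden-layer feature matrix
    # Convex, linear problem for the outer weights:
    # minimize ||H beta - y||^2 + reg * ||beta||^2
    beta = np.linalg.solve(H.T @ H + reg * np.eye(width), H.T @ y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return relu(X @ W + b) @ beta

# Usage: approximate a 1-Lipschitz target on [0, 1]^2
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 2))
y = np.abs(X[:, 0] - X[:, 1])
W, b, beta = fit_rvfl(X, y, width=1024, seed=1)
X_test = rng.uniform(size=(500, 2))
err = np.max(np.abs(predict_rvfl(X_test, W, b, beta) - np.abs(X_test[:, 0] - X_test[:, 1])))
print(f"empirical sup-norm error on test points: {err:.3f}")
```

The key point the sketch conveys is that, with the hidden layer frozen, the only trainable parameters enter linearly, so fitting reduces to solving a single linear system rather than a nonconvex optimization problem.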
With the direct input–output connections, a random vector functional link (RVFL)...
It has been known for some years that the uniform-density problem for forward neural networ...
In this paper we provide explicit upper bounds on some distances between the (law of the) output of ...
The learning speed of feed-forward neural networks is notoriously slow and has presented a bottlenec...
Approximation properties of the MLP (multilayer feedforward perceptron) model of neural netw...
We contribute to a better understanding of the class of functions that can be represented by a neura...
Random networks of nonlinear functions have a long history of empirical success in function fitting ...
A random net is a shallow neural network where the hidden layer is frozen with random assignment and...
We propose semi-random features for nonlinear function approximation. The flexibility of semi-random...
Deep learning has been extremely successful in recent years. However, it should be noted that neural...
In this note, we study how neural networks with a single hidden layer and ReLU activation interpolat...
Traditionally, the random vector functional link (RVFL) is a randomization-based neural network that has be...
This paper discusses the function approximation properties of the 'Gelenbe' random neural netw...