The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, researchers and practitioners alike have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification has been lacking. In this paper, we begin to fill this theoretical gap. We provide a...
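To make the construction concrete, the following Python sketch shows an RVFL-style network of the kind described above: the input-to-hidden weights and biases are drawn at random and left untrained, and only the output layer is fit, here by least squares. The width, the uniform sampling of the inner weights, and the tanh activation are illustrative assumptions, not the specific distribution or construction analysed in the paper.

import numpy as np

def fit_rvfl(X, y, width=200, scale=1.0, rng=None):
    """Fit a single-hidden-layer network whose inner weights are random and frozen."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.uniform(-scale, scale, size=(d, width))   # random input-to-hidden weights
    b = rng.uniform(-scale, scale, size=width)        # random hidden biases
    H = np.tanh(X @ W + b)                            # fixed random features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # only the outer weights are trained
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Example: approximate f(x) = sin(2*pi*x) on [0, 1]
X = np.linspace(0, 1, 500).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
W, b, beta = fit_rvfl(X, y, width=200, scale=10.0, rng=0)
print(np.max(np.abs(predict_rvfl(X, W, b, beta) - y)))  # worst-case error of the fit

Because the random features are fixed, training reduces to a single linear least-squares solve, which is the source of the speed-up that motivates this line of work.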
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights ...
The first part of this thesis develops fundamental limits of deep neural network learning by charact...
We study the expressivity of deep neural networks. Measuring a network's compl...
A Random Vector Functional Link (RVFL) network is a depth-2 neural network with random inner weights...
We define a neural network in infinite dimensional spaces for which we can show the universal approx...
The universal approximation theorem is generalised to uniform convergence on the (noncompact) input ...
Overparameterized neural networks enjoy great representation power on complex data, and more importa...
Significant success of deep learning has brought unprecedented challenges to conventional wisdom in ...
We prove two universal approximation theorems for a range of dropout neural networks. These are feed...
Approximation properties of the MLP (multilayer feedforward perceptron) model of neural netw...
In this thesis we summarise several results in the literature which show the approximation capabilit...
The paper contains approximation guarantees for neural networks that are trained with gradient flow,...
We study two problems from mathematical signal processing. First, we consider the problem of approximate...
This work features an original result linking approximation and optimization theory for deep learnin...