We prove a Quantitative Functional Central Limit Theorem for one-hidden-layer neural networks with a generic activation function. The rates of convergence that we establish depend heavily on the smoothness of the activation function, ranging from logarithmic for non-differentiable activations such as the ReLU to $\sqrt{n}$ for very regular activations. Our main tools are functional versions of the Stein-Malliavin approach; in particular, we rely heavily on a quantitative functional central limit theorem recently established by Bourguin and Campese (2020).
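To fix ideas, a minimal sketch of the setting (the symbols $f_n$, $\sigma$, $v_j$, $w_j$, $b_j$ and the precise distributional assumptions are illustrative here, not quoted from the paper): the one-hidden-layer networks under consideration can be written, at random initialization, as
$$
f_n(x) \;=\; \frac{1}{\sqrt{n}} \sum_{j=1}^{n} v_j \,\sigma\big(\langle w_j, x\rangle + b_j\big),
$$
where $\sigma$ is the activation function and the weights $v_j$, $w_j$ and biases $b_j$ are independent Gaussian random variables. Under the standard $1/\sqrt{n}$ normalization, the random field $f_n$ converges to a Gaussian process as the width $n$ grows, and the theorem quantifies the speed of this convergence in a functional distance, with the rate governed by the smoothness of $\sigma$.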