We study the fundamental limits to the expressive power of neural networks. Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$. The lower bound depends on the packing number of $F$, the range of $F$, and the fat-shattering dimension of $G$. We then instantiate this bound to the case where $G$ corresponds to a piecewise-polynomial feed-forward neural network, and describe in detail the application to two sets $F$: Hölder balls and multivariate monotonic functions. Besides matching (known or new) upper bounds up to log factors, our lower bounds ...
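To convey the shape of such a result, here is a schematic LaTeX statement of a packing/fat-shattering lower bound; the constant $c$, the scale $c\varepsilon$ at which the fat-shattering dimension is evaluated, and the exact logarithmic condition are illustrative assumptions, not the theorem as stated above.

% Schematic only: if $G$ cannot shatter enough points at scale $c\varepsilon$ to
% separate a $2\varepsilon$-packing of $F$, then some $f \in F$ remains
% $\varepsilon$-far from all of $G$ in $L^p(\mu)$. Here $M(F, 2\varepsilon)$
% denotes the $2\varepsilon$-packing number of $F$ and $c$ is a hypothetical constant.
\[
  \operatorname{fat}_{c\varepsilon}(G) \;\lesssim\; \log M(F, 2\varepsilon)
  \quad\Longrightarrow\quad
  \sup_{f \in F}\, \inf_{g \in G}\, \|f - g\|_{L^p(\mu)} \;\ge\; \varepsilon .
\]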
In this work we discuss the problem of selecting suitable approximators from families of parameteriz...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
Recently there has been much interest in understanding why deep neural networks are preferred to sha...
A feedforward neural net with d input neurons and with a single hidden layer of n neurons is given b...
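For reference, the definition this truncated line begins is the standard single-hidden-layer model; in conventional (here assumed) notation, with activation $\sigma$, inner weights $a_j$, biases $b_j$, and outer coefficients $c_j$, such a net computes:

% Single-hidden-layer feedforward net: $d$ inputs, one hidden layer of $n$ neurons.
% The symbols $a_j, b_j, c_j, \sigma$ are the conventional choices, assumed here;
% the truncated abstract may use different notation.
\[
  x \;\longmapsto\; \sum_{j=1}^{n} c_j\, \sigma\bigl(\langle a_j, x \rangle + b_j\bigr),
  \qquad x \in \mathbb{R}^d,\ a_j \in \mathbb{R}^d,\ b_j, c_j \in \mathbb{R}.
\]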
We consider the approximation of smooth multivariate functions in C(R^d) by feedforward neural netwo...
Approximation properties of the MLP (multilayer feedforward perceptron) model of neural netw...
The problem of approximating functions by neural networks using incremental algorithms is s...
We calculate lower bounds on the size of sigmoidal neural networks that approximate continuous funct...
We consider neural network approximation spaces that classify functions according to the rate at whi...
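As a hedged sketch of what such an approximation space typically looks like: writing $\Sigma_n$ for the set of networks with at most $n$ weights (the paper's actual complexity measure may differ) and $E(f, \Sigma_n)_X = \inf_{g \in \Sigma_n} \|f - g\|_X$ for the best approximation error, a space with rate exponent $\alpha$ can be defined as:

% Illustrative definition only; the precise space behind this truncated abstract
% may differ in the norm $X$, the network class $\Sigma_n$, and a secondary index.
\[
  A^{\alpha}(X) \;=\; \bigl\{\, f \in X \;:\; \exists\, C > 0 \ \text{such that}\ 
  E(f, \Sigma_n)_X \le C\, n^{-\alpha} \ \text{for all } n \in \mathbb{N} \,\bigr\}.
\]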