Neural networks with the Rectified Linear Unit (ReLU) nonlinearity are described by a vector of parameters $\theta$, and realized as a piecewise linear continuous function $R_{\theta}: x \in \mathbb R^{d} \mapsto R_{\theta}(x) \in \mathbb R^{k}$. Natural scaling and permutation operations on the parameters $\theta$ leave the realization unchanged, leading to equivalence classes of parameters that yield the same realization. These considerations in turn lead to the notion of identifiability -- the ability to recover (the equivalence class of) $\theta$ from the sole knowledge of its realization $R_{\theta}$. The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invari...
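To make the scaling invariance mentioned above concrete, the following sketch (an illustration of our own, not code from the paper; the network shape, seed, and variable names are assumptions) checks numerically that multiplying the incoming weights and bias of one hidden ReLU neuron by $\lambda > 0$ while dividing its outgoing weights by $\lambda$ leaves the realization $R_\theta$ unchanged, since $\mathrm{ReLU}(\lambda z) = \lambda\,\mathrm{ReLU}(z)$ for $\lambda > 0$.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def realization(W1, b1, W2, b2, x):
    # One-hidden-layer ReLU network: R_theta(x) = W2 @ ReLU(W1 @ x + b1) + b2
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
d, h, k = 3, 5, 2                              # input, hidden, output widths (arbitrary)
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
W2, b2 = rng.normal(size=(k, h)), rng.normal(size=k)

# Rescale hidden neuron j: multiply its incoming weights and bias by lam > 0,
# divide its outgoing weights by lam.  Positive homogeneity of ReLU
# (ReLU(lam * z) = lam * ReLU(z) for lam > 0) keeps the realization unchanged.
j, lam = 2, 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[j, :] *= lam
b1s[j] *= lam
W2s[:, j] /= lam

x = rng.normal(size=d)
print(np.allclose(realization(W1, b1, W2, b2, x),
                  realization(W1s, b1s, W2s, b2, x)))    # True
```

The two parameter vectors $\theta$ and $\theta'$ built above are thus distinct but belong to the same equivalence class, which is exactly the obstruction that the notion of identifiability must account for.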
We consider general approximation families encompassing ReLU neural networks. On the one hand, we in...
We can compare the expressiveness of neural networks that use rectified linear units (ReLUs) by the ...
Rectified linear units (ReLUs) have become the main model for the neural units in current deep learn...
The possibility for one to recover the parameters (weights and biases) of a neural network thanks to t...
Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answe...
This paper focuses on establishing $L^2$ approximation properties for deep ReLU convolutional neural...
We address the following question: How redundant is the parameterisation of ReLU networks? Specific...
We contribute to a better understanding of the class of functions that can be represented by a neura...
We study the problem of approximating compactly-supported integrable functions while implementing th...
We contribute to a better understanding of the class of functions that is represented by a neural ne...
This paper explores the expressive power of deep neural networks through the framework of function c...
We explore convergence of deep neural networks with the popular ReLU activation function, as the dep...
Injectivity plays an important role in generative models where it enables inference; in inverse prob...
We algorithmically determine the regions and facets of all dimensions of the canonical polyhedral co...
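The last abstract concerns the canonical polyhedral complex of a ReLU network, i.e. the decomposition of input space into cells on which the network is affine. As a rough illustration of that object (a brute-force sketch of our own, not the paper's algorithm; dimensions, the grid, and variable names are assumptions), the snippet below counts the distinct activation patterns of a small one-hidden-layer ReLU network over a grid, each pattern corresponding to one region on which the network is affine.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
d, h = 2, 4                                   # 2-D inputs, 4 hidden ReLU units
W, b = rng.normal(size=(h, d)), rng.normal(size=h)

# For one hidden layer, each activation pattern (the sign vector of W @ x + b)
# selects one cell of the hyperplane arrangement {x : W[i] @ x + b[i] = 0},
# and the network restricted to that cell is affine.  Sampling a grid over
# [-1, 1]^2 and collecting distinct patterns gives a crude count of the
# linear regions meeting the square.
grid = np.linspace(-1.0, 1.0, 400)
patterns = set()
for x1, x2 in product(grid, grid):
    pre = W @ np.array([x1, x2]) + b
    patterns.add(tuple(pre > 0))

print(f"distinct activation regions found on the grid: {len(patterns)}")
```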