Understanding why deep nets can classify data in large dimensions remains a challenge. It has been proposed that they do so by becoming stable to diffeomorphisms, yet existing empirical measurements suggest that this is often not the case. We revisit this question by defining a maximum-entropy distribution on diffeomorphisms, which allows us to study typical diffeomorphisms of a given norm. We confirm that stability toward diffeomorphisms does not correlate strongly with performance on benchmark data sets of images. By contrast, we find that the stability toward diffeomorphisms relative to that of generic transformations $R_f$ correlates remarkably with the test error $\epsilon_t$. It is of order unity at initialization but decreases by several dec...
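A minimal sketch of the kind of measurement this abstract describes, assuming a max-entropy ensemble parametrized by a sine basis with Gaussian coefficients of variance $T/(i^2+j^2)$ and a high-frequency cutoff; the function names, the nearest-neighbour warping, and the norm-matched noise baseline are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sample_diffeo_field(n, T, cut):
    """One component of a displacement field tau(x, y) =
    sum_{i,j} C_ij sin(i pi x) sin(j pi y) on an n x n grid, with
    C_ij ~ N(0, T / (i^2 + j^2)) and cutoff i^2 + j^2 <= cut^2."""
    x = np.linspace(0.0, 1.0, n)
    modes = np.arange(1, cut + 1)
    basis = np.sin(np.pi * modes[:, None] * x[None, :])  # (cut, n)
    i2j2 = modes[:, None] ** 2 + modes[None, :] ** 2
    C = np.random.randn(cut, cut) * np.sqrt(T / i2j2)
    C[i2j2 > cut ** 2] = 0.0          # high-frequency cutoff
    return basis.T @ C @ basis        # (n, n) displacement field

def apply_diffeo(img, T, cut):
    """Warp an (n, n) image by a sampled diffeomorphism
    (nearest-neighbour interpolation, for brevity)."""
    n = img.shape[0]
    tau_x = sample_diffeo_field(n, T, cut)
    tau_y = sample_diffeo_field(n, T, cut)
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    src_i = np.clip(np.round(ii + (n - 1) * tau_x).astype(int), 0, n - 1)
    src_j = np.clip(np.round(jj + (n - 1) * tau_y).astype(int), 0, n - 1)
    return img[src_i, src_j]

def relative_stability(f, imgs, T, cut):
    """Estimate R_f: sensitivity of a network f to diffeomorphisms,
    relative to isotropic noise of the same input-space norm."""
    D, G = [], []
    for x in imgs:
        xd = apply_diffeo(x, T, cut)
        eta = np.random.randn(*x.shape)
        eta *= np.linalg.norm(xd - x) / np.linalg.norm(eta)  # match norms
        D.append(np.sum((f(xd) - f(x)) ** 2))
        G.append(np.sum((f(x + eta) - f(x)) ** 2))
    return np.mean(D) / np.mean(G)
```

In this sketch the temperature $T$ plays the role of the diffeomorphism norm: larger $T$ produces larger typical displacements, and $R_f \approx 1$ means the network is no more stable to smooth deformations than to generic noise of equal magnitude.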
It is widely believed that the success of deep networks lies in their ability to learn a meaningful ...
The paper reviews and extends an emerging body of theoretical results on deep learning including the...
In many contexts, simpler models are preferable to more complex models and the control of this model...
A central question of machine learning is how deep nets manage to learn tasks in high dimensions. An...
Deep learning algorithms are responsible for a technological revolution in a variety of tasks includi...
Deep neural networks have achieved impressive results in many image classification tasks. However, s...
Across scientific and engineering disciplines, the algorithmic pipeline for processing and understand...
In this paper we address the issue of output instability of deep neural networks: small perturbation...
We show that the input correlation matrix of typical classification datasets has an eigenspectrum wh...
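Although this abstract is truncated here, the quantity it opens with is straightforward to measure. A minimal sketch, assuming inputs are flattened to vectors of shape (num_samples, dim); the function name and shapes are illustrative:

```python
import numpy as np

def input_correlation_spectrum(X):
    """Eigenspectrum of the empirical input correlation matrix of a
    dataset X with shape (num_samples, dim), sorted in decreasing order."""
    X = X - X.mean(axis=0)            # center each input coordinate
    corr = X.T @ X / X.shape[0]       # (dim, dim) correlation matrix
    return np.sort(np.linalg.eigvalsh(corr))[::-1]
```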
A main puzzle of deep networks revolves around the absence of overfitting despite overparametrizatio...
Deep neural networks have recently shown impressive classification performance on a diverse set of v...
It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate thi...
Despite their success, deep networks have been shown to be highly susceptible to perturbations, ofte...
First we present a proof that convolutional neural networks (CNNs) with max-norm regularization, max...