The low-dimensional manifold hypothesis posits that the data found in many applications, such as those involving natural images, lie (approximately) on low-dimensional manifolds embedded in a high-dimensional Euclidean space. In this setting, a typical neural network defines a function that takes a finite number of vectors in the embedding space as input. However, one often needs to evaluate the optimized network at points outside the training distribution. This paper considers the case in which the training data are distributed in a linear subspace of $\mathbb R^d$. We derive estimates on the variation of the learning function, defined by a neural network, in the direction transversal to the subspace. We study the potential regul...
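The setting above can be made concrete with a small numerical sketch, not taken from the paper: train a ReLU network on data confined to a linear subspace of $\mathbb R^d$ and probe how the learned function changes as one moves a training point along a direction orthogonal to that subspace. The dimensions, architecture, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's experiments):
# data lie in a k-dimensional subspace of R^d; we measure the variation of the
# trained network along a transversal (orthogonal) direction v.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, k, n = 10, 2, 500  # ambient dimension, subspace dimension, sample count

# Orthonormal basis U for the training subspace and a unit vector v orthogonal to it.
Q, _ = torch.linalg.qr(torch.randn(d, d))
U, v = Q[:, :k], Q[:, k]

# Training inputs lie exactly in span(U); labels depend only on in-subspace coordinates.
coords = torch.randn(n, k)
X = coords @ U.T
y = torch.sin(coords[:, 0:1]) + 0.5 * coords[:, 1:2]

net = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

# Variation transversal to the subspace: shift a training point by t*v and
# record how much the network's output changes.
with torch.no_grad():
    x0 = X[0]
    f0 = net(x0)
    for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
        ft = net(x0 + t * v)
        print(f"t = {t:3.1f}   |f(x + t v) - f(x)| = {(ft - f0).abs().item():.4f}")
```

The printed values give an empirical counterpart to the kind of transversal-variation estimates the paper studies: since the loss never sees displacements along $v$, any growth of $|f(x + t v) - f(x)|$ with $t$ is determined by the network's implicit behavior off the training subspace rather than by the data.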