In many contexts, simpler models are preferable to more complex ones, and controlling model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning, and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suited to deep neural networks. Here we develop the notion of geometric complexity, a measure of the variability of the model function computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization, and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
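Concretely, the geometric complexity of a model f over a dataset D is the discrete Dirichlet energy GC = (1/|D|) * sum over x in D of ||d f(x)/d x||_F^2, i.e. the mean squared Frobenius norm of the input-output Jacobian over the data. The sketch below illustrates this quantity; it is a minimal JAX example, not the authors' code, and the toy two-layer `model`, its parameter shapes, and the random data are illustrative assumptions.

```python
# Minimal sketch of geometric complexity as a discrete Dirichlet energy:
#   GC = (1/|D|) * sum_{x in D} ||Jacobian of f at x||_F^2
# The toy MLP below is an assumed stand-in for an arbitrary model f.

import jax
import jax.numpy as jnp

def model(params, x):
    """Toy MLP: one tanh hidden layer followed by a linear readout."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def geometric_complexity(params, batch):
    """Mean squared Frobenius norm of the input-output Jacobian over a batch."""
    def sq_jac_norm(x):
        # Jacobian of the model output w.r.t. a single input: (out_dim, in_dim).
        jac = jax.jacobian(lambda xi: model(params, xi))(x)
        return jnp.sum(jac ** 2)  # squared Frobenius norm
    return jnp.mean(jax.vmap(sq_jac_norm)(batch))

# Example usage on random parameters and data.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
in_dim, hid, out_dim = 4, 16, 3
params = (jax.random.normal(k1, (in_dim, hid)), jnp.zeros(hid),
          jax.random.normal(k2, (hid, out_dim)), jnp.zeros(out_dim))
batch = jax.random.normal(k3, (32, in_dim))
print(geometric_complexity(params, batch))
```

Penalizing this quantity during training would amount to a gradient-norm regularizer on the learned function, which is one of the training heuristics the abstract connects to geometric complexity.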