How to train deep neural networks (DNNs) so that they generalize well is a central concern in deep learning, especially for today's severely overparameterized networks. In this paper, we propose an effective method to improve model generalization by additionally penalizing the gradient norm of the loss function during optimization. We demonstrate that constraining the gradient norm of the loss function helps lead optimizers toward flat minima. We leverage a first-order approximation to compute the corresponding gradient efficiently, so that the method fits naturally into the gradient descent framework. In our experiments, we confirm that our method improves the generalization performance of various models on different datasets. Also, we show...
The remarkable practical success of deep learning has revealed some major surprises from a theoretic...
Optimization is the key component of deep learning. Increasing depth, which is vital for reaching a...
To fully uncover the great potential of deep neural networks (DNNs), various learning algorithms hav...
Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very g...
The general features of the optimization problem for the case of overparametrized nonlinear networks...
In the past decade, neural networks have demonstrated impressive performance in supervised learning....
The generalization mystery in deep learning is the following: Why do over-parameterized neural netwo...
© 2020 National Academy of Sciences. All rights reserved. While deep learning is successful in a num...
A main puzzle of deep networks revolves around the absence of overfitting despite overparametrizatio...
The classical statistical learning theory implies that fitting too many parameters leads to overfitt...
© 2020, The Author(s). Overparametrized deep networks predict well, despite the lack of an explicit ...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
In the last decade or so, deep learning has revolutionized entire domains of machine learning. Neura...