This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and combining different regularizations. The results provide a picture of tuning hyperparameters for neural NLP models.
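To make the named strategies concrete, here is a minimal PyTorch sketch of three of them: an L2 penalty on weights with embeddings excluded, a separate (possibly zero) L2 penalty on the embeddings, and dropout. The model, its size, and all hyperparameter values are illustrative assumptions, not the paper's actual testbed.

import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    # A toy embedding-based classifier; architecture and sizes are assumptions.
    def __init__(self, vocab_size=10000, embed_dim=50, hidden_dim=100, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.dropout = nn.Dropout(p=0.5)  # dropout regularization
        self.fc = nn.Linear(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids).mean(dim=1)  # average the word vectors
        h = torch.relu(self.fc(self.dropout(emb)))
        return self.out(h)

model = TextClassifier()

# Split the parameters so weight decay (an L2 penalty) hits the connection
# weights while the embeddings get their own coefficient.
embed_params = list(model.embedding.parameters())
other_params = [p for n, p in model.named_parameters() if not n.startswith("embedding")]

optimizer = torch.optim.Adam([
    {"params": other_params, "weight_decay": 1e-4},  # penalize weights (embeddings excluded)
    {"params": embed_params, "weight_decay": 1e-5},  # penalize embeddings (0.0 to exclude them)
])

# Usage: logits = model(torch.randint(0, 10000, (4, 12)))
The decay coefficients above are placeholders; the abstract's point is precisely that such values should be tuned incrementally and in combination.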
In this thesis, we present Regularized Learning with Feature Networks (RLFN), an approach for regula...
Machine-learning (ML) methods often utilized in applications like computer vision, recommendation sy...
In recent years, neural machine translation (NMT) has become the dominant approach in automated tran...
Overfitting is a common problem in neural networks. This report uses a simple neural network to do s...
Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks ...
We explore the roles and interactions of the hyper-parameters governing regularization, and propose ...
Despite powerful representation ability, deep neural networks (DNNs) are prone to over-fitting, beca...
Neural language models (LMs) based on recurrent neural networks (RNN) are some of the most successfu...
Overfitting is one issue that deep learning faces in particular. It leads to highly accurate classif...
Numerous approaches address over-fitting in neural networks: by imposing a penalty on the parameters...
This paper aims to investigate the limits of deep learning by exploring the issue of overfitting in ...
In this paper we address the important problem of optimizing regularization parameters in ...
We analyze neural network architectures that yield state of the art results on named entity recognit...
Despite the fact that modern deep neural networks have the ability to memorize (almost) the entire t...