In this paper, we propose an extension of the Extreme Learning Machine algorithm for training Single-hidden Layer Feedforward Neural networks that incorporates Dropout and DropConnect regularization in its optimization process. We show that both types of regularization lead to the same solution for the calculation of the network output weights, and this solution is adopted by the proposed DropELM network. The proposed algorithm exploits Dropout and DropConnect regularization without computationally intensive iterative weight tuning. We show that adopting such a regularization approach can lead to better solutions for the network output weights. We incorporate the proposed regularization approach in several recently proposed ELM algorithms and...
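To make the closed-form flavour of this idea concrete, the following is a minimal sketch, not the paper's exact formulation, of how marginalizing Bernoulli Dropout masks over the hidden activations turns the training objective into a ridge-like problem solved in one linear step; the function name, `p`, and `lam` are illustrative assumptions.

```python
import numpy as np

def marginalized_dropout_weights(H, T, p=0.8, lam=1e-3):
    """Hedged sketch: closed-form output weights when Dropout on the
    hidden activations H is marginalized out analytically.

    Taking the expectation over Bernoulli(p) masks yields a ridge-like
    objective with a data-dependent diagonal penalty, so the weights
    come from one linear solve instead of iterative mask sampling.
    H: (n_samples, n_hidden) hidden outputs; T: (n_samples, n_outputs).
    """
    G = H.T @ H                                   # hidden-layer Gram matrix
    D = np.diag(np.diag(G))                       # Dropout's diagonal penalty
    A = p**2 * G + p * (1.0 - p) * D + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, p * (H.T @ T))      # output weights beta
```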
Dropout is a recently introduced algorithm for training neural networks by randomly dropping...
Dropout as a regularization technique is widely used in fully connected layers, while it is less effective...
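For illustration, below is a hedged sketch of channel-wise ("spatial") dropout, one common structured variant used in convolutional layers because neighbouring activations within a feature map are strongly correlated; it is not necessarily the method of the paper summarized above, and all names are illustrative.

```python
import numpy as np

def spatial_dropout(feature_maps, p=0.5, rng=None, training=True):
    """Channel-wise dropout sketch: drop whole feature maps rather than
    individual activations, which plain Dropout would leave correlated
    neighbours to compensate for.
    feature_maps: array of shape (batch, channels, height, width).
    """
    if not training:
        return feature_maps
    rng = rng or np.random.default_rng()
    n, c = feature_maps.shape[:2]
    mask = (rng.random((n, c, 1, 1)) < p).astype(feature_maps.dtype)
    return feature_maps * mask / p                # inverted-dropout rescaling
```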
Extreme learning machine (ELM) has been put forward for single hidden layer feedforward networks. Be...
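As a reference point for the training scheme this abstract describes, here is a minimal ELM sketch assuming the standard setup: a random, untrained hidden layer followed by a least-squares solve for the output weights (the activation choice and all names are illustrative).

```python
import numpy as np

def train_elm(X, T, n_hidden=100, seed=0):
    """Minimal ELM sketch: the hidden layer is random and never trained;
    only the output weights are fitted, by ordinary least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)     # output weights
    return W, b, beta
```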
In order to prevent overfitting and improve the generalization performance of Extreme Learning M...
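A common way to add such regularization is the standard ridge-penalized ELM solution beta = (H^T H + I/C)^{-1} H^T T; the sketch below implements that form, with `C` as the usual fit-versus-norm trade-off parameter (the summarized paper's exact scheme may differ).

```python
import numpy as np

def regularized_elm_weights(H, T, C=1.0):
    """Ridge-regularized ELM output weights, a standard remedy for ELM
    overfitting: beta = (H^T H + I/C)^{-1} H^T T."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
```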
We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large...
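The contrast with Dropout fits in a few lines: the sketch below masks individual weights rather than activations. For simplicity it uses inverted-dropout-style rescaling during training, whereas the original paper instead uses a moment-matching approximation at test time; the signature and names are illustrative.

```python
import numpy as np

def dropconnect_forward(x, W, b, p=0.5, rng=None, training=True):
    """DropConnect sketch: zero a random subset of *weights* on each
    forward pass, whereas Dropout zeroes whole activations."""
    rng = rng or np.random.default_rng()
    if training:
        mask = rng.random(W.shape) < p        # keep each weight w.p. p
        W = W * mask / p                      # simplified rescaling (see note)
    return np.maximum(0.0, x @ W + b)         # ReLU hidden activation
```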
Deep neural nets with a large number of parameters are very powerful machine learning systems. Howev...
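For comparison with the weight-masking variant above, here is the standard inverted-dropout sketch that silences activations during training and rescales by 1/p so that no change is needed at test time; `p` and the signature are illustrative.

```python
import numpy as np

def dropout(h, p=0.5, rng=None, training=True):
    """Inverted-dropout sketch: randomly silence hidden units during
    training; the 1/p rescaling keeps the expected activation unchanged."""
    if not training:
        return h
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) < p            # keep each unit w.p. p
    return h * mask / p
```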
Regularization is essential when training large neural networks. As deep neural networks can be math...
Dropout is a popular stochastic regularization technique for deep neural networks that works by rand...
Recent years have witnessed the success of deep neural networks in dealing with plenty of practica...
Neural networks are often over-parameterized and hence benefit from aggressive regularization. Conve...
In recent years, deep neural networks have become the state-of-the-art in many machine learning doma...
Recently it has been shown that when training neural networks on a limited amount of data, randomly ...
The undeniable computational power of artificial neural networks has granted the scientific communit...