In this paper, we propose to regularize deep neural nets with a new type of multitask learning in which the auxiliary task is formed by agglomerating classes into super-classes. The network can then be trained jointly on the class-level classification problem and the super-class-level classification problem. We study this approach in settings where the training set is small and show that, combined with a regularization scheme that randomly reinitializes weights in the deeper layers, it yields competitive results on the ImageNet and Caltech-256 datasets and state-of-the-art results on CIFAR-100.
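A minimal sketch of this joint training setup, assuming a PyTorch-style model with a shared backbone and two classification heads (all names, and the mixing weight alpha, are illustrative assumptions, not taken from the paper):

import torch.nn as nn

class SuperClassNet(nn.Module):
    # Shared backbone with two heads: one over the original classes and
    # one over the agglomerated super-classes.
    def __init__(self, backbone, feat_dim, n_classes, n_superclasses):
        super().__init__()
        self.backbone = backbone  # any feature extractor, e.g. a CNN trunk
        self.class_head = nn.Linear(feat_dim, n_classes)
        self.super_head = nn.Linear(feat_dim, n_superclasses)

    def forward(self, x):
        features = self.backbone(x)
        return self.class_head(features), self.super_head(features)

def joint_loss(class_logits, super_logits, y_class, y_super, alpha=0.5):
    # Weighted sum of the class-level and super-class-level cross-entropies;
    # alpha is a hypothetical mixing weight balancing the two tasks.
    ce = nn.functional.cross_entropy
    return ce(class_logits, y_class) + alpha * ce(super_logits, y_super)

The super-class targets follow directly from the class labels under the chosen agglomeration, e.g. y_super = superclass_map[y_class] for a precomputed class-to-super-class lookup tensor.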