Multi-task learning (MTL) is an established method of inducing bias in neural network learning. It has been used as a method of knowledge transfer in sequential learning tasks, and as a method of overcoming the problem of impoverished training data. A standard problem in neural networks is the establishment of the proper network structure for a given problem. This problem is compounded in the MTL case since the structure of the network can significantly impact the effectiveness of the MTL method. This problem is exacerbated further in the sequential or life-long learning case, as choosing structure a priori for an unknown number of tasks is not possible. In this paper, we explore the use of cascade correlation (CC) as a generative neural ne...
The article examines the question of how learning multiple tasks interacts with neural architectures...
The brain can be viewed as a complex modular structure with features of information processing throu...
Constructive algorithms have proved to be powerful methods for training feedforward neural networks....
Multi-task learning (MTL) is an established method of inducing bias in neural network learning. It h...
Neural network modeling typically ignores the role of knowledge in learning by starting from random ...
Multiple task learning (MTL) neural networks are one of the better documented methods of inductive t...
The Cascade-Correlation learning algorithm constructs a multi-layer artificial neural network as it ...
Abstract: "Cascade-Correlation is a new architecture and supervised learning algorithm for artificia...
Most neural network learning algorithms cannot use knowledge other than what is provided in the trai...
Context-sensitive Multiple Task Learning, or csMTL, is presented as a method of inductive transfer t...
This thesis is divided into two parts: the first examines various extensions to Cascade-Correlation,...
This contribution revisits an earlier discovered observation that the average performance of a popu...
Deep Learning has demonstrated outstanding performance on several machine learning tasks. These resu...
It is often difficult to predict the optimal neural network size for a particular application. Const...