Two learning algorithms, Neural Nets and Function Decomposition, are tested on a set of real-world problems of varying complexity and compared on two measures of generalization: a PAC-like measure called SAM and an average of the error over the learning curve, called ERR2. Function Decomposition generally performs better on both measures. Although it would be tempting to declare Function Decomposition the better algorithm, several considerations, such as runtime and problem complexity distribution, make such a conclusion tentative.

1 INTRODUCTION

Inductive learning can be defined as the problem of filling in the missing values of a partially specified function based on the specified values [Michalski et al., 1983]. For theoretical purposes,...
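The abstract describes ERR2 only as "an average of the error over the learning curve". Under that reading, one hypothetical way to compute such a measure is to train on samples of increasing size, evaluate on the held-out remainder, and average the resulting error rates. The sketch below is an illustration of that idea, not the paper's exact definition; the function names, the choice of sample sizes, and the majority-class toy learner are all assumptions.

```python
import random


def learning_curve_err2(train_and_eval, data, sample_sizes, seed=0):
    """Average generalization error over a learning curve.

    An ERR2-style measure under the assumption stated above; the paper's
    precise definition may differ. `train_and_eval(train, test)` must
    return an error rate in [0, 1].
    """
    rng = random.Random(seed)
    errors = []
    for m in sample_sizes:
        shuffled = data[:]
        rng.shuffle(shuffled)
        train, test = shuffled[:m], shuffled[m:]
        errors.append(train_and_eval(train, test))
    return sum(errors) / len(errors)


def majority_learner(train, test):
    # Toy learner for illustration: predict the majority training label.
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    wrong = sum(1 for _, y in test if y != guess)
    return wrong / len(test) if test else 0.0


# Toy dataset of (features, label) pairs with alternating labels.
data = [((i,), i % 2) for i in range(20)]
err2 = learning_curve_err2(majority_learner, data, sample_sizes=[2, 5, 10])
```

Averaging over the curve, rather than reporting error at a single training-set size, rewards learners that generalize well from few examples as well as from many.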
This paper discusses within the framework of computational learning theory the current state of know...
Despite enormous progress in machine learning, artificial neural networks still lag behind brains in...
A linearly separable Boolean function is learned by a diluted perceptron with optimal stability. A d...
We describe a series of careful numerical experiments which measure the average generalization cap...
Problem decomposition and divide-and-conquer strategies have been proposed to improve the performanc...
This thesis presents a new theory of generalization in neural network types of learning machines. Th...
Neural Networks (NN) can be trained to perform tasks such as image and handwriting recognition, cred...
Abstract. The relationship between generalization ability, neural network size and function complex...
In the "decomposition/reconstruction" strategy, we can solve a complex problem by 1) decomposing the pro...
In this paper, we bound the generalization error of a class of Radial Basis Function networks, for...
We outline a differential theory of learning for statistical pattern classification. When applied to...
Theoretical and computational justification is given for improved generalization when the training s...