We study learning from examples in higher-order perceptrons, which can realize polynomially separable rules. We first calculate the storage capacity for random binary patterns and find that it is a monotonically increasing function of the relative weight of the highest-order monomial term. We then analyse the generalization ability of higher-order perceptrons trained on examples drawn from a realizable rule. Unlike their first-order counterparts, higher-order perceptrons are found to exhibit stepwise learning as a function of the number of training examples.
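As a concrete illustration of the model class (not of the paper's statistical-mechanics calculation), a higher-order perceptron can be viewed as an ordinary perceptron acting on a monomial feature map. The minimal Python sketch below, in which the mixing parameter `gamma`, the sizes, and the function names are all illustrative assumptions, shows a second-order machine storing random labelings that a first-order perceptron cannot.

```python
# Minimal sketch of a second-order perceptron; all names and sizes are
# illustrative assumptions, not taken from the paper.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def monomial_features(x, gamma):
    """Expand a +/-1 pattern into [linear terms, gamma * quadratic monomials]."""
    pairs = itertools.combinations(range(len(x)), 2)
    quad = np.array([x[i] * x[j] for i, j in pairs])
    return np.concatenate([x, gamma * quad])

def store_patterns(X, y, gamma, epochs=200):
    """Try to store labels y on patterns X using plain perceptron updates in
    the expanded feature space; return True once an error-free pass occurs."""
    phi = np.array([monomial_features(x, gamma) for x in X])
    w = np.zeros(phi.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for f, target in zip(phi, y):
            if target * (w @ f) <= 0:    # misclassified (or zero margin)
                w += target * f          # standard perceptron update
                mistakes += 1
        if mistakes == 0:
            return True
    return False

N, P = 20, 60   # P/N = 3 exceeds the first-order capacity alpha_c = 2,
                # so gamma = 0 (no quadratic terms) should fail to store
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)
for gamma in (0.0, 1.0):
    print(f"gamma = {gamma}: stored = {store_patterns(X, y, gamma)}")
```

Note that for the perceptron rule any gamma > 0 spans the same feature space, so this sketch only illustrates the architecture; the paper's result that capacity grows monotonically with the relative weight of the highest-order term concerns normalized couplings and comes from a replica-style calculation, not from this algorithm.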
High Order Perceptrons offer an elegant solution to the problem of finding the amount of hidden laye...
One of the most important features of natural as well as artificial ne...
A new algorithm for on-line learning linear-threshold functions is proposed which efficiently combin...
A basic neural model for Boolean computation is examined in the context of learning from examples. T...
We present alternative algorithms that avoid the combinatorial explosion problem, and that emerge ro...
We study the evolution of the generalization ability of a simple linear perceptron with N inputs wh...
We study learning from single presentation of examples (incremental or on-line learning) in single-...
Neural networks are widely applied in research and industry. However, their broader application is h...
We consider the sample complexity of concept learning when we classify by using a fixed Boolean func...
The focus of the paper is the estimation of the maximum number of states that can be made st...
An input-output map in which the patterns are divided into classes is considered for the perceptron....
Recurrent perceptron classifiers generalize the classical perceptron model. They take into account t...
We consider the generalization error of concept learning when using a fixed Boolean function...
A linearly separable Boolean function is learned by a diluted perceptron with optimal stability. A d...
Zero temperature Gibbs learning is considered for a connected committee machine with K hidden units....
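For readers unfamiliar with the architecture named in the last snippet: a fully connected committee machine with K hidden units outputs the majority vote of K sign-units that each see all N inputs, and zero-temperature Gibbs learning means drawing such a machine at random from those consistent with every training example. A minimal sketch of the forward pass, with all names and sizes chosen for illustration:

```python
# Sketch of a fully connected committee machine with K hidden units;
# names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def committee_output(W, x):
    """W: (K, N) hidden-unit weight vectors. The output is the sign of the
    majority of the K hidden +/-1 activations (K odd avoids ties)."""
    hidden = np.sign(W @ x)          # K hidden sign-units, all seeing x
    return np.sign(hidden.sum())     # majority vote

K, N = 3, 50
W = rng.standard_normal((K, N))
x = rng.choice([-1.0, 1.0], size=N)
print(committee_output(W, x))
```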