We discuss two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by FeedForward Networks (FFN). The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to ...
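The second class of algorithms mentioned above, based on Robbins-Monro style stochastic approximation, can be illustrated with a short sketch. The following Python snippet is not the paper's procedure; it is a minimal illustration, assuming a one-hidden-layer sigmoid network, a squared-error criterion, and a step-size schedule chosen so that the usual Robbins-Monro conditions (the step sizes sum to infinity while their squares sum to a finite value) hold. The function names (`forward`, `rm_step`), the activation, and the toy target are invented for illustration.

```python
# Illustrative sketch (not the paper's exact algorithm): Robbins-Monro style
# stochastic-approximation updates for the weights of a one-hidden-layer
# feedforward network, using i.i.d. samples (x_t, y_t) and step sizes a_t
# with sum a_t = infinity and sum a_t^2 < infinity.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, w2, b2):
    """One-hidden-layer FFN: sigmoid hidden units, linear scalar output."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return w2 @ h + b2, h

def rm_step(x, y, params, a_t):
    """Single stochastic update on the squared error 0.5 * (f(x) - y)^2."""
    W1, b1, w2, b2 = params
    y_hat, h = forward(x, W1, b1, w2, b2)
    err = y_hat - y
    # Gradients of the instantaneous squared error w.r.t. each parameter block.
    grad_w2 = err * h
    grad_b2 = err
    delta_h = err * w2 * h * (1.0 - h)        # back-propagated hidden-layer error
    grad_W1 = np.outer(delta_h, x)
    grad_b1 = delta_h
    return (W1 - a_t * grad_W1, b1 - a_t * grad_b1,
            w2 - a_t * grad_w2, b2 - a_t * grad_b2)

# Toy regression problem: estimate E[y | x] from i.i.d. samples.
d, m = 3, 8                                    # input dimension, hidden units
params = (0.1 * rng.standard_normal((m, d)), np.zeros(m),
          0.1 * rng.standard_normal(m), 0.0)
for t in range(10_000):
    x = rng.uniform(-1.0, 1.0, size=d)
    y = np.sin(x.sum()) + 0.1 * rng.standard_normal()   # noisy target
    params = rm_step(x, y, params, a_t=0.5 / (t + 1) ** 0.6)
```

For the first class of algorithms, in which the unknown weights sit only in the output layer, the analogous update would touch only the output-layer terms (the `grad_w2` and `grad_b2` steps above), which is the linear-in-the-parameters setting where the potential function arguments of Aizerman et al. apply.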
We consider the problem of learning neural networks from samples. The sample size which is s...
This paper reviews some of the recent results in applying the theory of Probably Approximately Corre...
We consider the problem of learning the dependence of one random variable on another, from a finite ...
The authors present a class of efficient algorithms for PAC learning continuous functions and regres...
A learning algorithm for feedforward neural networks is presented that is based on a parameter estim...
The problem of function estimation using feedforward neural networks based on an independently and id...
We discuss a model of consistent learning with an additional restriction on the probability distrib...
Learning algorithms have been used both on feed-forward deterministic networks and on feed-back stat...
Feedforward networks are a class of approximation techniques that can be used to learn to perform so...
In this study, we focus on feed-forward neural networks with a single hidden layer. The research tou...
In recent years, multi-layer feedforward neural networks have been popularly used for pattern classi...
We consider the problem of learning the dependence of one random variable on another, from ...
We consider the problem of learning the dependence of one random variable on another, from a finite ...
We consider the problem of learning the dependence of one random variable on another, from a finite ...
We consider the problem of learning the dependence of one random variable on another, from a finite ...