We discuss a model of consistent learning with an additional restriction on the probability distribution of training samples, the target concept, and the hypothesis class. We show that the model provides a significant improvement on the upper bounds of sample complexity, i.e., the minimal number of random training samples allowing a selection of the hypothesis with a predefined accuracy and confidence. Further, we show that the model has the potential for providing a finite sample complexity even in the case of infinite VC-dimension, as well as for a sample complexity below the VC-dimension. This is achieved by linking sample complexity to an "average" number of implementable dichotomies of a training sample rather than the maximal size...
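For context, a sketch in our own notation of the standard quantities the abstract contrasts: the growth function (the maximal dichotomy count), the classical consistent-learner bound stated in terms of it, and the distribution-averaged dichotomy count that an "average"-based analysis would substitute for the maximum. None of this is quoted from the paper.

```latex
% Growth function: the maximal number of dichotomies H induces on m points,
% and the VC-dimension defined from it.
\Pi_H(m) = \max_{x_1,\dots,x_m} \bigl|\{(h(x_1),\dots,h(x_m)) : h \in H\}\bigr|,
\qquad
d_{\mathrm{VC}} = \max\{m : \Pi_H(m) = 2^m\}.

% Classical bound for consistent learners (Blumer et al. / VC style):
\Pr\bigl[\exists\, h \in H \text{ consistent with } S,\ \mathrm{err}(h) > \epsilon\bigr]
\le 2\,\Pi_H(2m)\,2^{-\epsilon m/2}.

% An "average" analysis replaces the max by an expectation over the sample
% distribution D; this annealed count is never larger than the growth
% function and can remain small even when d_{VC} is infinite:
\mathbb{E}_{S \sim D^m}\bigl[\,|H|_S|\,\bigr] \le \Pi_H(m).
```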
We introduce a new notion of algorithmic stability, which we call training stability. We show that t...
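The snippet below is our own minimal sketch of the generic quantity that notions like this refine: replace-one algorithmic stability, i.e., the worst-case loss change when a single training point is swapped. "Training stability" itself is the paper's definition and is not reproduced here; the estimator, the model choice (ridge regression), and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def replace_one_gap(X, y, i, x_new, y_new, probe_X, probe_y):
    """Max loss change on probe points when training sample i is replaced."""
    model_S = Ridge(alpha=1.0).fit(X, y)          # trained on S
    X2, y2 = X.copy(), y.copy()
    X2[i], y2[i] = x_new, y_new                   # S with point i swapped
    model_S2 = Ridge(alpha=1.0).fit(X2, y2)
    gap = np.abs((model_S.predict(probe_X) - probe_y) ** 2
                 - (model_S2.predict(probe_X) - probe_y) ** 2)
    return gap.max()                              # empirical beta on probes

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
beta_hat = replace_one_gap(X, y, 0, rng.normal(size=3), 0.0, X, y)
print(f"empirical replace-one stability on probes: {beta_hat:.4f}")
```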
By making assumptions on the probability distribution of the potentials in a feed-forward neural net...
We present a unified framework for a number of different ways of failing to generalize properly. Du...
Feedforward networks are a class of approximation techniques that can be used to learn to perform so...
We present a new algorithm for general reinforcement learning where the true environment is known to...
We discuss two classes of convergent algorithms for learning continuous functions (and also regressi...
This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple o...
This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output fe...
Valiant's protocol for learning is extended to the case where the distribution of the exampl...
Feedforward networks together with their training algorithms are a class of regression techniques th...
In this work, we study how the selection of examples affects the learning procedure in a boolean ne...
This paper applies the theory of probably approximately correct (PAC) learning to multiple-output fe...
We consider some problems in learning with respect to a fixed distribution. We introduce two new not...