Abstract. We present a fairly general method for constructing classes of functions of finite scale-sensitive dimension (the scale-sensitive dimension is a generalization of the Vapnik–Chervonenkis dimension to real-valued functions). The construction is as follows: start from a class F of functions of finite VC dimension, take the convex hull co F of F, and then take the closure of co F in an appropriate sense. As an example, we study in more detail the case where F is the class of threshold functions. It is shown that this closure includes two important classes of functions:

• neural networks with one hidden layer and bounded output weights;
• the so-called Γ class of Barron, which was shown to satisfy a number of interesting approximation and closure properties.
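The first step of the construction can be made concrete with a short sketch. Under our own notation (not taken from the paper), take F to be threshold functions f(x) = 1[w·x ≥ b]; an element of the convex hull co F is a convex combination Σᵢ αᵢ fᵢ with αᵢ ≥ 0 and Σᵢ αᵢ = 1, which is precisely a one-hidden-layer network of threshold units whose output weights are nonnegative and sum to 1, hence bounded:

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold(x, w, b):
    """A single threshold unit f_{w,b}(x) = 1[w.x >= b]."""
    return (x @ w >= b).astype(float)

def convex_combination(x, units, alphas):
    """Evaluate sum_i alpha_i * f_i(x), an element of co F."""
    return sum(a * threshold(x, w, b) for a, (w, b) in zip(alphas, units))

d, k = 3, 5                                   # input dimension, number of units
units = [(rng.normal(size=d), rng.normal()) for _ in range(k)]
alphas = rng.random(k)
alphas /= alphas.sum()                        # convex weights: >= 0, sum to 1

x = rng.normal(size=(10, d))
y = convex_combination(x, units, alphas)
assert np.all((0.0 <= y) & (y <= 1.0))        # a convex combination of {0,1}-valued
                                              # functions stays in [0, 1]
```

Because each fᵢ takes values in {0, 1} and the weights form a convex combination, every element of co F is [0, 1]-valued; the closure step of the paper then enlarges this family while keeping the scale-sensitive dimension finite.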