In a recent paper, the authors introduced the notion of sample width for binary classifiers defined on the set of real numbers. It was shown that the performance of such classifiers could be quantified in terms of this sample width. This paper considers how to adapt the idea of sample width so that it can be applied in cases where the classifiers are defined on some finite metric space. We discuss how to employ a greedy set-covering heuristic to bound generalization error. Then, by relating the learning problem to one involving certain graph-theoretic parameters, we obtain generalization error bounds that depend on the sample width and on measures of 'density' of the underlying metric space.
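The greedy set-covering heuristic mentioned in the abstract can be sketched as follows; this is an illustrative implementation of the generic greedy cover rule (pick the set covering the most still-uncovered elements), not the paper's own code, and all names are my own:

```python
def greedy_set_cover(universe, sets):
    """Greedy heuristic for set cover: repeatedly choose the set that
    covers the most still-uncovered elements. Achieves a ln(n)-factor
    approximation of the optimal cover size."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick the set with the largest intersection with the uncovered part
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe cannot be covered by the given sets")
        cover.append(best)
        uncovered -= best
    return cover
```

For example, with universe {1,...,5} and candidate sets {1,2,3}, {2,4}, {4,5}, {3,5}, the heuristic first takes {1,2,3} and then {4,5}, covering everything with two sets.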
In this paper we consider the problem of learning nearest-prototype classifiers in any finite distan...
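A nearest-prototype classifier of the kind this abstract studies assigns each point the label of its closest prototype under the given metric. A minimal sketch, assuming a finite distance space represented by a distance function; the function and parameter names are illustrative, not the paper's:

```python
def nearest_prototype_label(x, prototypes, dist):
    """Label x by its nearest prototype.

    prototypes: list of (point, label) pairs.
    dist: a metric, dist(a, b) -> non-negative float.
    """
    # min over prototypes by distance to x; ties go to the first listed
    _, label = min(prototypes, key=lambda pl: dist(x, pl[0]))
    return label
```

With prototypes [(0.0, 'neg'), (10.0, 'pos')] and absolute-difference distance on the reals, the point 3.0 is labeled 'neg' and 7.0 is labeled 'pos'.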
A half-space over a distance space is a generalization of a half-space in a vector space. An importa...
We propose a novel approach for the estimation of the size of training sets that are neede...
In a recent paper [M. Anthony, J. Ratsaby, Maximal width learning of binary functions, Theoretical C...
In M. Anthony and J. Ratsaby. Maximal width learning of binary functions. Theoretical Computer Scien...
This paper concerns learning binary-valued functions defined on R, and investigates how a particular t...
We derive new margin-based inequalities for the probability of error of classifiers. The main featur...
The studies of generalization error give possible approaches to estimate the performance of ...
In this paper we propose a general framework to study the generalization properties of binary classi...
Given a set F of classifiers and a probability distribution over their domain, one can defin...