The minimum number of misclassifications achievable with affine hyperplanes on a given set of labeled points is a key quantity in both statistics and computational learning theory. However, determining this quantity exactly is NP-hard, cf. Simon and van Horn (1995). Hence, there is a need for reasonable approximation procedures. This paper compares three approaches to approximating the minimum number of misclassifications achievable with affine hyperplanes. The first approach is based on the regression depth method of Rousseeuw and Hubert (1999) for linear regression models. We compare the results of the regression depth method with the support vector machine approach proposed by Vapnik (1998) and with a heuristic search algorithm.
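To make the quantity concrete, here is a minimal sketch (plain NumPy, with function names of our own choosing, not code from the paper) of computing it exactly for points in the plane: an optimal line can be nudged until it passes through two data points, so in 2D an O(n³) search over point pairs, with a small offset to either side and both orientations, suffices.

```python
import itertools
import numpy as np

def misclassifications(X, y, w, b):
    # Points with y_i * (w.x_i + b) < 0 lie on the wrong side of the line.
    return int(np.sum(y * (X @ w + b) < 0))

def min_misclassifications_2d(X, y):
    # Exact minimum over all affine lines in the plane: an optimal line can be
    # translated/rotated until it touches two data points, so checking every
    # pair (nudged to either side, in both orientations) suffices.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=int)   # labels are +1 / -1
    n = len(X)
    # Baseline: a line "at infinity" puts all points in one class.
    best = min(int(np.sum(y == 1)), int(np.sum(y == -1)))
    for i, j in itertools.combinations(range(n), 2):
        d = X[j] - X[i]
        if np.allclose(d, 0):
            continue                       # coincident points define no line
        w = np.array([-d[1], d[0]])        # normal of the line through X[i], X[j]
        b = -float(w @ X[i])
        eps = 1e-9 * (1.0 + np.abs(X @ w + b).max())
        for b2 in (b - eps, b + eps):      # nudge off the two touched points
            e = misclassifications(X, y, w, b2)
            best = min(best, e, n - e)     # n - e: same line, flipped orientation
    return best
```

For instance, four XOR-labeled corner points of the unit square cannot be separated by any line, and the search correctly reports a minimum of one misclassification; this exhaustive approach is only feasible for small n and low dimension, which is exactly why the approximation procedures compared in this paper are needed.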
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classi...
Sufficient dimension reduction is popular for reducing data dimensionality without stringent model a...
This paper deals with robust regression and subspace estimation and more preci...
SIGLE. Available from TIB Hannover: RR 8460(2000,53) / FIZ - Fachinformationszentrum Karlsruhe / TIB ...
The regression depth method (RDM) proposed by Rousseeuw and Hubert [RH99] plays an important role in...
We find very tight bounds on the accuracy of a Support Vector Machine classification error within th...
Finding a hyperplane that separates two classes of data points with the minimum number of misclassif...
In this chapter, we present the main classic machine learning methods. A large part of the chapter i...
The problem of extracting a minimal number of data points from a large dataset, in order to generat...
We show that, for any set of n points in d dimensions, there exists a hyperplane with regression dep...
We explore a novel approach to upper bound the misclassification error for problems with data compri...
In this report we show some consequences of the work done by Pontil et al. in [1]. In particular we ...
We study the problem of designing support vector machine (SVM) classifiers that minimize the maximu...
Support vector machine (SVM) is a powerful tool in binary classification, known to attain excellent...
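As a complement to the snippets above: the SVM approach sidesteps the NP-hard 0-1 loss minimization by optimizing the convex hinge loss instead. A minimal sketch (our own full-batch subgradient implementation in NumPy, with hyperparameters chosen for illustration, not code from any of the cited works):

```python
import numpy as np

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    # Minimize the hinge-loss surrogate
    #   (1/n) * sum_i max(0, 1 - y_i (w.x_i + b)) + lam * ||w||^2
    # by full-batch subgradient descent.  The hinge loss upper-bounds the
    # 0-1 loss, so the fitted classifier's error count upper-bounds the
    # minimum number of misclassifications achievable with affine hyperplanes.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)     # labels are +1 / -1
    n, dim = X.shape
    w, b = np.zeros(dim), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0                                    # inside the margin
        gw = 2.0 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On linearly separable data this recovers a perfect separator; on non-separable data the convex surrogate only yields an upper bound on the optimum, and the size of that gap is the kind of question the comparisons above address.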