The problem of how to effectively implement k-fold cross-validation for support vector machines is considered. Although this model-selection criterion is widely used, thanks to its modest computational requirements and its good ability to identify a well-performing model, it is not clear how the committee of classifiers produced by the k folds should be employed for on-line classification. Three methods are described and tested here, based respectively on averaging, random choice, and majority voting. Each method is evaluated on a wide range of datasets and for different fold settings.
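The three combination strategies named above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it uses scikit-learn's `SVC` and `KFold`, and all function names, parameters, and the synthetic dataset are our own assumptions.

```python
# Hedged sketch: combine the k SVMs produced by k-fold cross-validation
# for on-line prediction via (1) averaging of decision values,
# (2) random choice of a committee member, and (3) majority voting.
# Names and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def train_committee(X, y, k=5, C=1.0, gamma="scale"):
    """Train one SVM per fold on that fold's k-1 training parts."""
    committee = []
    for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        clf = SVC(C=C, gamma=gamma)
        clf.fit(X[train_idx], y[train_idx])
        committee.append(clf)
    return committee

def predict_majority(committee, X):
    """Each member votes; the most frequent label wins."""
    votes = np.stack([clf.predict(X) for clf in committee])  # shape (k, n)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

def predict_average(committee, X):
    """Average the signed SVM decision values, then threshold at zero."""
    scores = np.mean([clf.decision_function(X) for clf in committee], axis=0)
    return (scores > 0).astype(int)

def predict_random(committee, X, rng=None):
    """Delegate each query batch to one randomly chosen member."""
    rng = rng or np.random.default_rng(0)
    return rng.choice(committee).predict(X)

# Toy usage on a synthetic binary problem.
X, y = make_classification(n_samples=200, random_state=0)
committee = train_committee(X, y, k=5)
labels = predict_majority(committee, X[:5])
```

Majority voting and averaging use all k classifiers per query, while random choice costs only a single SVM evaluation, which is the trade-off the abstract's comparison is concerned with.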