In this paper we present a case study of applying co-training to image classification. We consider two scene classification tasks: indoors vs. outdoors and animals vs. sports. The results show that co-training with Naïve Bayes using 8-10 labelled examples achieved only 1.2-1.5% lower classification accuracy than Naïve Bayes trained on the fully labelled training set (138 examples in task 1 and 827 examples in task 2). Co-training was found to be sensitive to the choice of base classifier, with Naïve Bayes outperforming Random Forest. We also propose a simple co-training modification based on the different inductive biases of classification algorithms and show that it is a promising approach.