The good performance of most classical learning algorithms generally rests on high-quality training data that is clean and unbiased. However, such data has become much harder than ever to obtain in many real-world problems, owing to the difficulty of collecting large-scale unbiased data and labeling it precisely for training. In this paper, we propose a general Contrast Co-learning (CCL) framework to refine biased and noisy training data when an unbiased yet unlabeled data pool is available. CCL starts with multiple sets of potentially biased and noisy training data and trains a set of classifiers individually. Then, under the assumption that confidently classified data samples may have higher probabilities of being co...
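As a rough illustration of the refinement step sketched above, the snippet below trains one classifier per noisy training set and keeps only the samples whose labels the peer classifiers confidently agree with. Everything here (the use of scikit-learn's LogisticRegression, the refine_by_agreement name, the 0.9 confidence threshold) is an illustrative assumption, not the paper's actual CCL procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def refine_by_agreement(train_sets, threshold=0.9):
    """train_sets: list of (X, y) pairs, each a possibly biased/noisy
    training set. Assumes >= 2 sets and integer labels 0..C-1 present
    in every set, so predict_proba columns line up across models."""
    models = [LogisticRegression(max_iter=1000).fit(X, y)
              for X, y in train_sets]
    refined = []
    for i, (X, y) in enumerate(train_sets):
        # Average the class probabilities predicted by the peer models.
        peer_proba = np.mean(
            [m.predict_proba(X) for j, m in enumerate(models) if j != i],
            axis=0,
        )
        pred = peer_proba.argmax(axis=1)
        conf = peer_proba.max(axis=1)
        # Keep a sample only if the peers confidently agree with its label,
        # on the assumption that confident predictions are more often correct.
        keep = (pred == y) & (conf >= threshold)
        refined.append((X[keep], y[keep]))
    return refined
```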
Co-training can learn from datasets having a small number of labelled examples and a large number of...
In this paper, we propose a novel co-learning framework (CoSSL) with decoupled representation learni...
The objective of this paper is visual-only self-supervised video representation learning. We make th...
doi:10.1109/ICDM.2010.23, Proceedings - IEEE International Conference on Data Mining (ICDM), pp. 649-65
Recently, semi-supervised learning algorithms such as co-training have been used in many domains. In co-tr...
Co-training is a well-known semi-supervised learning technique that applies two basic learners to tr...
Contrastive learning (CL), a self-supervised learning approach, can effectively learn visual represe...
Co-training is one of the major semi-supervised learning paradigms which iteratively trains...
Co-training is a famous semi-supervised learning paradigm exploiting unlabeled data with two views. ...
This paper presents a new approach to identifying and eliminating mislabeled training instances for ...
Recent studies have demonstrated that gradient matching-based dataset synthesis, or dataset condensa...
Co-training is a semi-supervised learning technique used to recover the unlabeled data based on two ...
Contrastive self-supervised learning (SSL) learns an embedding space that maps similar data pairs cl...
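Several of the snippets above describe the same classic two-view co-training loop: two learners are trained on different views of the data, and each adds the other's confidently predicted unlabeled examples to the labeled set. A minimal sketch follows; the GaussianNB base learners, the 0.95 confidence threshold, and the co_train signature are assumptions made for illustration, not any cited paper's exact algorithm.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_a, X_b, y, X_a_u, X_b_u, rounds=10, threshold=0.95):
    """X_a/X_b: the two labeled views; y: labels (assumed 0..C-1, all
    present initially); X_a_u/X_b_u: the same unlabeled pool seen
    through each view."""
    clf_a, clf_b = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        clf_a.fit(X_a, y)
        clf_b.fit(X_b, y)
        if len(X_a_u) == 0:
            break
        # Each learner labels the unlabeled pool through its own view.
        proba_a = clf_a.predict_proba(X_a_u)
        proba_b = clf_b.predict_proba(X_b_u)
        # Promote examples that either learner labels confidently; the
        # more confident learner "teaches" the label to both.
        conf_a, conf_b = proba_a.max(axis=1), proba_b.max(axis=1)
        pick = np.maximum(conf_a, conf_b) >= threshold
        if not pick.any():
            break
        teach = np.where(conf_a >= conf_b,
                         proba_a.argmax(axis=1), proba_b.argmax(axis=1))
        X_a = np.vstack([X_a, X_a_u[pick]])
        X_b = np.vstack([X_b, X_b_u[pick]])
        y = np.concatenate([y, teach[pick]])
        X_a_u, X_b_u = X_a_u[~pick], X_b_u[~pick]
    return clf_a, clf_b
```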