Multi-view learning makes use of diverse models arising from multiple sources of input or different feature subsets for the same task. For example, a given natural language processing task can combine evidence from models built on character, morpheme, lexical, or phrasal views. The most common strategy in multi-view learning, especially popular in the neural network community, is to merge the multiple representations into a single unified vector through concatenation, averaging, or pooling, and then to build a single-view model on top of that unified representation. As an alternative, we examine whether building one model per view and then unifying the different models can lead to improvements, especially in low-resource scenarios. More specifical...
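The abstract above contrasts two strategies: early fusion (unify the view representations, then train one model) and the per-view alternative (one model per view, combined afterwards). A minimal sketch of the two, using hypothetical toy vectors and a stand-in scoring function rather than any model from the cited work:

```python
# Hypothetical 3-d representations of the same input from two views.
char_view = [0.2, 0.9, 0.1]  # e.g. a character-level representation
lex_view = [0.7, 0.3, 0.5]   # e.g. a lexical representation

# Early fusion: unify the views into one vector, then build one model on it.
concat = char_view + lex_view                             # concatenation
avg = [(c + l) / 2 for c, l in zip(char_view, lex_view)]  # averaging

# Late fusion (the alternative examined above): one model per view,
# then combine the per-view predictions.
def view_score(view):
    # Stand-in for a trained per-view model; here just the mean activation.
    return sum(view) / len(view)

combined = (view_score(char_view) + view_score(lex_view)) / 2
```

The choice matters mostly downstream: early fusion yields one input space for a single learner, while late fusion keeps per-view learners that can later be combined or co-trained.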
Recent research has shown promise in multilingual modeling, demonstrating how a single model is capa...
In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between...
Co-training is a well-known semi-supervised learning paradigm that exploits unlabeled data through two views. ...
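Co-training, as described above, can be sketched in a minimal self-contained form: two models, one per view, take turns pseudo-labeling their most confident unlabeled examples to grow a shared labeled pool. The toy data and nearest-centroid "classifiers" below are illustrative assumptions, not the setup of the cited work:

```python
import random

random.seed(0)

def make_data(n):
    """Toy data: two conditionally independent noisy views of a binary label."""
    data = []
    for _ in range(n):
        y = random.choice([0, 1])
        mu = 1.0 if y == 1 else -1.0
        data.append((mu + random.gauss(0, 0.5), mu + random.gauss(0, 0.5), y))
    return data

def fit_centroids(pairs):
    """Nearest-centroid model on one 1-D view: returns (centroid_0, centroid_1)."""
    c0 = [x for x, y in pairs if y == 0]
    c1 = [x for x, y in pairs if y == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def predict(model, x):
    c0, c1 = model
    return 1 if abs(x - c1) < abs(x - c0) else 0

def margin(model, x):
    """Confidence proxy: how much closer x is to one centroid than the other."""
    c0, c1 = model
    return abs(abs(x - c0) - abs(x - c1))

def co_train(labeled, unlabeled, rounds=5, k=5):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model_a = fit_centroids([(a, y) for a, _, y in labeled])
        model_b = fit_centroids([(b, y) for _, b, y in labeled])
        # Each view's model pseudo-labels its k most confident unlabeled
        # examples, which then augment the shared labeled pool.
        for model, view in ((model_a, 0), (model_b, 1)):
            unlabeled.sort(key=lambda ex: -margin(model, ex[view]))
            picked, unlabeled = unlabeled[:k], unlabeled[k:]
            labeled += [(a, b, predict(model, (a, b)[view])) for a, b, _ in picked]
        if not unlabeled:
            break
    model_a = fit_centroids([(a, y) for a, _, y in labeled])
    model_b = fit_centroids([(b, y) for _, b, y in labeled])
    return model_a, model_b

train = make_data(60)
# A tiny labeled seed with both classes present; the rest is unlabeled.
seed = [ex for ex in train if ex[2] == 0][:2] + [ex for ex in train if ex[2] == 1][:2]
pool = [ex for ex in train if ex not in seed]
model_a, model_b = co_train(seed, pool)

test = make_data(200)
hits = 0
for a, b, y in test:
    # At test time, trust whichever view's model is more confident.
    pred = predict(model_a, a) if margin(model_a, a) >= margin(model_b, b) else predict(model_b, b)
    hits += pred == y
accuracy = hits / len(test)
```

Starting from only four labeled examples, the pseudo-labeling loop recovers centroids near the true class means, which is the key premise of co-training: the two views give (conditionally) independent evidence, so each model's confident labels are informative training signal for the other.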
Obtaining high-quality and up-to-date labeled data can be difficult in many real-world machine learn...
Multi-view learning, a subfield of machine learning that aims to improve model performance...
We investigate the problem of learning document classifiers in a multilingual setting, from collecti...
We propose a multi-view learning approach called co-labeling which is applicable for several machine...
© 2015 IEEE. It is often expensive and time consuming to collect labeled training samples in many re...
The lack of labeled data is one of the main obstacles to the application of machine learning algorit...
This study discusses the effect of semi-supervised learning in combination with pretrained language ...
Training a model with limited data is an essential task for machine learning and visual recognition....
We address the problem of learning classifiers when observations have multiple...
In many real-world applications there are usually abundant unlabeled data but the amount o...
Within a situation where Semi-Supervised Learning (SSL) is available to exploit unlabeled data, this...