Within the natural language processing (NLP) community, active learning has been widely investigated and applied in order to alleviate the annotation bottleneck faced by developers of new NLP systems and technologies. This paper presents the first theoretical analysis of stopping active learning based on stabilizing predictions (SP). The analysis has revealed three elements that are central to the success of the SP method: (1) bounds on Cohen’s Kappa agreement between successively trained models impose bounds on differences in F-measure performance of the models; (2) since the stop set does not have to be labeled, it can be made large in practice, helping to guarantee that the results transfer to previously unseen streams of examples at tes...
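The stabilizing-predictions (SP) idea described above can be sketched in a few lines: train successive models during active learning, compare their predictions on an unlabeled stop set via Cohen's Kappa, and stop once agreement stays above a threshold for several consecutive rounds. The helper names, the 0.99 threshold, and the window of three rounds are illustrative assumptions here, not a definitive implementation of the paper's method.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's Kappa agreement between two label sequences of equal length."""
    n = len(a)
    assert n == len(b) and n > 0
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def should_stop(kappa_history, threshold=0.99, window=3):
    """SP-style rule (assumed parameters): stop when the Kappa between each of
    the last `window` pairs of successive models exceeds `threshold`."""
    if len(kappa_history) < window:
        return False
    return all(k >= threshold for k in kappa_history[-window:])
```

In use, each active-learning round would append `cohens_kappa(prev_preds, curr_preds)` computed on the stop set to `kappa_history` and halt annotation when `should_stop` returns `True`; because the stop set never needs labels, it can be made large cheaply, as the abstract notes.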
Traditional supervised machine learning algorithms are expected to have access to a large corpus of ...
A common obstacle preventing the rapid deployment of supervised machine learning algorithms is the l...
This thesis studies active learning and confidence-rated prediction, and the interplay between these...
A survey of existing methods for stopping active learning (AL) reveals the needs for methods that ar...
As supervised machine learning methods are increasingly used in language technology, the need for h...
Active learning reduces annotation costs for supervised learning by concentrating labelling efforts ...
In this paper, we address the problem of knowing when to stop the process of active learning. We pro...
BACKGROUND: Active learning is a powerful tool for guiding an experimentation process. Instead of do...
Recent breakthroughs made by deep learning rely heavily on a large number of annotated samples. To o...
Non-active adaptive sampling is a way of building machine learning models from a training...
Supervised learning deals with the inference of a distribution over an output or label space conditi...
In many settings in practice it is expensive to obtain labeled data while unlabeled data is abundant...