The performance of information retrieval (IR) systems is commonly evaluated using a test set with known relevance judgments. Crowdsourcing is one method for learning which documents are relevant to each query in the test set. However, the quality of relevance judgments obtained through crowdsourcing can be questionable, because crowdsourcing relies on workers of unknown quality, with possible spammers among them. To detect spammers, the authors' algorithm compares judgments across workers; they evaluate their approach by comparing the consistency of crowdsourced ground truth with that obtained from expert annotators, and conclude that crowdsourcing can match the quality of expert judgments.
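The abstract only says that spammers are detected by comparing judgments between workers. As a rough illustration of that idea (not the authors' actual algorithm), the sketch below flags workers whose labels rarely agree with the leave-one-out majority vote of the other workers on shared (query, document) pairs; the data layout, the agreement threshold, and the minimum-overlap parameter are all assumptions.

```python
# Illustrative sketch only: agreement-based spammer detection.
# judgments: list of (worker_id, (query_id, doc_id), relevance_label) tuples (assumed format).
from collections import Counter, defaultdict

def detect_spammers(judgments, min_agreement=0.5, min_items=5):
    # Group all labels by the item they judge.
    by_item = defaultdict(list)            # item -> [(worker, label), ...]
    for worker, item, label in judgments:
        by_item[item].append((worker, label))

    agree = Counter()   # per worker: agreements with leave-one-out majority
    total = Counter()   # per worker: items shared with at least one other worker
    for item, votes in by_item.items():
        for worker, label in votes:
            others = [l for w, l in votes if w != worker]
            if not others:
                continue                    # nothing to compare against
            majority = Counter(others).most_common(1)[0][0]
            total[worker] += 1
            agree[worker] += int(label == majority)

    # Flag only workers who judged enough shared items and agreed rarely.
    return {
        w for w in total
        if total[w] >= min_items and agree[w] / total[w] < min_agreement
    }
```

In a setup like the one the abstract describes, the agreement threshold would presumably be tuned against the expert-judged subset used for evaluation.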
Crowdsourcing is a way to solve problems that need human contribution. Crowdso...
Information Retrieval (IR) researchers have often used existing IR evaluation collections and transf...
Evaluation is instrumental in the development and management of effective information retrieval syst...
Crowdsourcing is a popular technique to collect large amounts of human-generated labels, such as rel...
We consider the problem of acquiring relevance judgements for information retrieval (IR) ...
Crowdsourcing has become an alternative approach to collect relevance judgments at scale thanks to t...
Crowdsourcing relevance judgments for the evaluation of search engines is used increasingly to overc...
Test collections are extensively used to evaluate information retrieval systems in laboratory-based ev...