NAACL 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk

Efforts to automatically acquire world knowledge from text suffer from the lack of an easy means of evaluating the resulting knowledge. We describe initial experiments using Mechanical Turk to crowdsource evaluation to non-experts for little cost, resulting in a collection of factoids with associated quality judgements. We describe the method of acquiring usable judgements from the public and the impact of such large-scale evaluation on the task of knowledge acquisition.
Crowdsourcing platforms such as Amazon’s Mechanical Turk (AMT) provide inexpensive and scalable wor...
This article is a position paper about crowdsourced microworking systems and e...
Amazon Mechanical Turk, an online marketplace designed for crowdsourcing tasks to other people for c...
This paper describes a framework for evaluation of spoken dialogue systems. Typically, evaluation of...
This paper addresses the manual evaluation of Machine Translation (MT) quality by means of crowdso...
This paper investigates the feasibility of using crowd-sourcing services for the human assessment ...
Many Artificial Intelligence tasks need large amounts of commonsense knowledge. Because obtaining th...
Crowdsourcing has recently been attracting increasing attention as a promising means of collecting l...
This article is a position paper about Amazon Mechanical Turk, the use of which has been s...
One of the major bottlenecks in the development of data-driven AI systems is the cost of reliable hu...
Manual evaluation of translation quality is generally thought to be excessively time ...
Researchers have increasingly turned to Amazon Mechanical Turk (AMT) to crowdsource speech data, pre...
The emergence of crowdsourcing as a commonly used approach to collect vast quantities of human asses...