Crowdsourcing platforms make it possible to propose simple human intelligence tasks to a large number of participants, who carry out these tasks. The workers often receive a small payment, or the platforms rely on other incentive mechanisms, for example increasing a worker's reputation score when the tasks are completed correctly. We address the problem of identifying experts among the participants, that is, workers who tend to answer the questions correctly. Knowing which workers are reliable could improve the quality of the knowledge that can be extracted from the responses. Unlike other works in the literature, we assume that participants can give partial or incomplete responses when they are not sure of their answer...
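As a purely illustrative sketch (not the method of this paper), a partial response can be represented as a set of candidate labels the worker considers possible, with an empty set standing for a skipped question; the toy Python snippet below, with hypothetical data and a hypothetical `reliability` helper, scores a worker against assumed gold answers by splitting credit across the labels in a partial answer.

```python
# Toy illustration only: partial responses as sets of candidate labels.
responses = {
    "worker_1": {"q1": {"cat"}, "q2": {"dog", "wolf"}, "q3": {"car"}},
    "worker_2": {"q1": {"cat", "dog"}, "q2": {"dog"}, "q3": set()},
}
gold = {"q1": "cat", "q2": "dog", "q3": "car"}  # hypothetical ground truth


def reliability(worker_answers, gold):
    """Average credit per answered question: 1/|answer| when the true
    label is among the proposed labels, 0 otherwise."""
    score, answered = 0.0, 0
    for q, labels in worker_answers.items():
        if not labels:              # incomplete response: question skipped
            continue
        answered += 1
        if gold[q] in labels:
            score += 1.0 / len(labels)   # vaguer answers earn less credit
    return score / answered if answered else 0.0


for worker, answers in responses.items():
    print(worker, round(reliability(answers, gold), 3))
```

Under these assumptions, a worker who proposes a broad set containing the correct label is rewarded less than one who commits to the single correct label, which captures the intuition that informative experts give precise, correct answers.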