Impact of noisy annotators’ reliability in a crowdsourcing system performance
Document type: Conference report
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Restricted access - publisher's policy
Crowdsourcing is a powerful tool for harnessing citizen assessments in complex decision tasks. When multiple annotators provide individual labels, a more reliable collective decision is obtained if the annotators' individual reliability parameters are incorporated into the decision-making procedure. The well-known Maximum A Posteriori (MAP) rule weights the individual labels in proportion to the annotators' reliability. In this work we analyze how crowdsourcing system performance degrades when noisy annotators' reliability parameters are used, and we derive an alternative MAP-based rule to be applied when these parameters are neither known nor estimated by the decision system. We also derive analytical expected error rates, and their upper bounds, for each rule; these serve as a useful tool for estimating the number of annotators needed in the collective decision system as a function of the level of noise in the estimated reliability parameters.
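The MAP weighting described in the abstract can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; the code below assumes the standard binary setting in which each annotator i has reliability p_i (probability of labeling correctly) and the MAP fusion rule weights each ±1 label by the log-odds log(p_i / (1 − p_i)):

```python
import math

def map_decision(labels, reliabilities):
    """MAP fusion of binary annotator labels (+1/-1): each label is
    weighted by the log-odds of its annotator's reliability p_i.
    Assumes 0 < p_i < 1 and an equiprobable prior on the true label."""
    score = sum(y * math.log(p / (1 - p))
                for y, p in zip(labels, reliabilities))
    return 1 if score >= 0 else -1

def majority_decision(labels):
    """Reliability-agnostic baseline: unweighted majority vote."""
    return 1 if sum(labels) >= 0 else -1

# A single highly reliable annotator (p = 0.9) can outvote two
# weak ones (p = 0.6, 0.55) under MAP, but not under majority vote.
labels = [1, 1, -1]
reliabilities = [0.6, 0.55, 0.9]
print(map_decision(labels, reliabilities))   # MAP: -1
print(majority_decision(labels))             # majority: +1
```

This contrast is the motivation for the paper's analysis: when the reliability estimates fed to the MAP rule are noisy, the weights can be badly miscalibrated, which is why an alternative rule for the unknown-reliability case is of interest.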
Citation: Cabrera-Bean, M., Diaz, C., Vidal, J. Impact of noisy annotators' reliability in a crowdsourcing system performance. A: European Signal Processing Conference. "2016 24th European Signal Processing Conference (EUSIPCO) took place 28 August-2 September 2016 in Budapest, Hungary". Budapest: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 1-5.