Academic Paper

Aggregation of pairwise comparisons with reduction of biases
Document Type
Working Paper
Subject
Computer Science - Machine Learning
Statistics - Machine Learning
Abstract
We study the problem of ranking from crowdsourced pairwise comparisons. Answers to pairwise tasks are known to be affected by the position of items on the screen; however, previous models for aggregating pairwise comparisons do not focus on modeling this kind of bias. We introduce a new aggregation model, factorBT, for pairwise comparisons, which accounts for certain factors of pairwise tasks that are known to be irrelevant to the outcome of a comparison but may affect workers' answers for perceptual reasons. By modeling the biases that influence workers, factorBT is able to reduce the effect of biased pairwise comparisons on the resulting ranking. Our empirical studies on real-world data sets show that factorBT produces more accurate rankings from crowdsourced pairwise comparisons than previously established models.
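
To illustrate the general idea described in the abstract, the following is a minimal sketch (not the paper's actual factorBT formulation) of a Bradley-Terry-style model extended with a single bias term for an irrelevant task factor, here an assumed left-position bias. All names and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: a Bradley-Terry model with one extra parameter
# capturing a left-position bias. For a task showing item i on the left and
# item j on the right, the model assumes
#     P(left item wins) = sigmoid(s_i - s_j + b),
# where s are latent item scores and b absorbs the positional bias, so the
# recovered scores (and thus the ranking) are less distorted by it.

def neg_log_likelihood(params, comparisons, n_items):
    scores, bias = params[:n_items], params[n_items]
    nll = 0.0
    for left, right, left_won in comparisons:
        logit = scores[left] - scores[right] + bias
        p = 1.0 / (1.0 + np.exp(-logit))
        nll -= np.log(p) if left_won else np.log(1.0 - p)
    return nll

def fit(comparisons, n_items):
    """comparisons: list of (left_item, right_item, left_won) tuples."""
    x0 = np.zeros(n_items + 1)  # item scores plus one bias parameter
    res = minimize(neg_log_likelihood, x0, args=(comparisons, n_items))
    return res.x[:n_items], res.x[n_items]

# Toy usage with hypothetical data: three items, some answers favoring
# whichever item appears on the left.
data = [(0, 1, True), (1, 0, True), (1, 0, True), (0, 1, False),
        (1, 2, True), (2, 1, False), (0, 2, True), (2, 0, True)]
scores, bias = fit(data, n_items=3)
print("estimated scores:", scores)
print("estimated left-position bias:", bias)
```

In this sketch the bias parameter is shared across all tasks; a model in the spirit of factorBT would condition on the observed task factors rather than assume a single global offset.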
Comment: presented at 2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA