Academic Paper

Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation
Document Type
Working Paper
Subject
Computer Science - Human-Computer Interaction
Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Machine Learning
Language
English
Abstract
Machine learning models are commonly used to detect toxicity in online conversations. These models are trained on datasets annotated by human raters. We explore how raters' self-described identities impact how they annotate toxicity in online comments. We first define the concept of specialized rater pools: rater pools formed based on raters' self-described identities, rather than at random. We formed three such rater pools for this study: raters from the U.S. who identify as African American, raters who identify as LGBTQ, and raters who identify as neither. Each of these rater pools annotated the same set of comments, which contains many references to these identity groups. We found that rater identity is a statistically significant factor in how raters annotate toxicity for identity-related annotations. Using a preliminary content analysis, we examined the comments with the most disagreement between rater pools and found nuanced differences in their toxicity annotations. Next, we trained models on the annotations from each rater pool and compared the scores of these models on comments from several test sets. Finally, we discuss how using raters who self-identify with the subjects of comments can create more inclusive machine learning models and provide more nuanced ratings than those from randomly assigned raters.
Comment: Proceedings of the ACM on Human-Computer Interaction, ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2022)