Academic Paper

Visual Cues for Disrespectful Conversation Analysis
Document Type
Conference
Source
2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 580-586, Sep. 2019
Subject
Computing and Processing
Signal Processing and Analysis
Visualization
Face
YouTube
Feature extraction
Labeling
Teleconferencing
Standards
disrespect
visual cue
face and gesture
Language
ISSN
2156-8111
Abstract
Toxic, abusive, or disrespectful behavior analysis is a non-trivial problem previously addressed mostly from the language perspective. In this paper, we present a novel video dataset containing disrespect and non-disrespect labels, and introduce such behavior analysis using visual cues. The dataset is collected from YouTube news show videos of two-party conversations, in which a host and a guest interact through teleconferencing. Each video is labeled by three trained raters to identify disrespect expressed through face and gesture, voice, and language. By resolving confounding factors, we generate the corresponding pairwise samples of non-disrespect. To specifically show the influence of visual cues in disrespectful interactions, we present 222 labeled clips (total duration = 974.41 s, mean duration = 4.39 s). We extract and analyze the facial action units (AUs) prevalent in disrespectful behavior. Our results show statistically significant differences after Bonferroni correction for Inner Brow Raiser (AU01), Lip Corner Depressor (AU15), and Chin Raiser (AU17). For prediction, we build two classifiers using logistic regression and a linear Support Vector Machine, achieving 62.61% and 61.48% accuracy, respectively. For an in-depth analysis of overall face and gesture features, we conduct a qualitative analysis using theme extraction. Our qualitative analysis provides further insights into leveraging synchronous and asynchronous features, along with combining text and audio data with visual cues, to better detect disrespect.
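The Bonferroni correction mentioned in the abstract can be illustrated with a minimal sketch: the significance level is divided by the number of AUs tested, and only AUs whose p-values fall below the adjusted threshold are reported as significant. The p-values below are hypothetical placeholders, not the paper's actual statistics.

```python
# Minimal sketch of Bonferroni correction over per-AU p-values.
# All p-values here are hypothetical, chosen only to illustrate the method.
alpha = 0.05
p_values = {
    "AU01": 0.004,  # Inner Brow Raiser
    "AU15": 0.009,  # Lip Corner Depressor
    "AU17": 0.012,  # Chin Raiser
    "AU12": 0.300,  # Lip Corner Puller (not significant in this sketch)
}

# Bonferroni: divide the family-wise alpha by the number of comparisons.
adjusted_alpha = alpha / len(p_values)  # 0.05 / 4 = 0.0125
significant = [au for au, p in sorted(p_values.items()) if p < adjusted_alpha]
print(significant)  # ['AU01', 'AU15', 'AU17']
```

With four comparisons, the per-test threshold drops to 0.0125, so an uncorrected borderline result (e.g. p = 0.03) would no longer count as significant.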