Academic Paper

Multimodal Emotion Recognition Using Classifier Reliability-Based Aggregation
Document Type
Conference
Source
2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 135-140, Oct. 2018
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Emotion recognition
Speech recognition
Acoustics
Reliability
Feature extraction
Speech processing
Support vector machines
Language
ISSN
2577-1655
Abstract
This paper addresses emotion recognition by individually processing and then aggregating different modes of human communication through a classification and aggregation framework. Specifically, the proposed framework processes speech acoustics, facial expressions, and body language using unimodal emotion classifiers. Speech emotion is classified with a deep neural network (DNN), while the facial and body-language emotion classifiers are implemented with supervised fuzzy adaptive resonance theory. The speech emotion classifier uses acoustic features, the facial emotion classifier uses features based on facial animation parameters (FAP), and the body-language emotion classifier uses head and hand features. The unimodal evaluations are then aggregated; to this end, the paper also proposes classifier reliability-based aggregation preferences for the unimodal evaluations. The reliability-based preferences are derived from the per-emotion accuracies of the unimodal classifiers. The results show that the proposed framework outperforms existing techniques. Furthermore, because of late fusion, the proposed approach remains functional as long as at least one mode of communication is available.
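
The reliability-based late fusion described in the abstract can be illustrated with a minimal sketch. The sketch assumes per-emotion accuracies of each unimodal classifier are available as weights, uses an illustrative four-emotion label set, hypothetical score values, and simple weight-normalized averaging; it is not the authors' exact formulation, only an example of the general idea, including how a missing mode (e.g., no detected face) is skipped.

# Minimal sketch of classifier reliability-based aggregation (late fusion).
# Names, label set, weighting scheme, and numbers are illustrative assumptions.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

def fuse(unimodal_scores, reliabilities):
    """Aggregate unimodal emotion scores with per-emotion reliability weights.

    unimodal_scores: dict mode -> score vector over EMOTIONS, or None if the
                     mode is unavailable (late fusion simply skips it).
    reliabilities:   dict mode -> per-emotion accuracy of that unimodal
                     classifier, e.g. measured on validation data.
    """
    fused = np.zeros(len(EMOTIONS))
    total_weight = np.zeros(len(EMOTIONS))
    for mode, scores in unimodal_scores.items():
        if scores is None:  # mode of communication not observed
            continue
        w = np.asarray(reliabilities[mode], dtype=float)
        fused += w * np.asarray(scores, dtype=float)
        total_weight += w
    # Normalize per emotion; emotions with no contributing mode stay at zero.
    fused = np.divide(fused, total_weight,
                      out=np.zeros_like(fused), where=total_weight > 0)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: speech and body language available, facial mode missing.
scores = {
    "speech": [0.7, 0.1, 0.1, 0.1],
    "face":   None,
    "body":   [0.4, 0.3, 0.2, 0.1],
}
reliab = {
    "speech": [0.80, 0.60, 0.70, 0.65],
    "face":   [0.75, 0.85, 0.60, 0.70],
    "body":   [0.55, 0.50, 0.65, 0.60],
}
print(fuse(scores, reliab))

Because aggregation happens after each unimodal decision, dropping a mode only removes its weighted contribution, which is what makes the framework usable when only one mode of communication is observed.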