Journal Article

Task Load Estimation from Multimodal Head-Worn Sensors Using Event Sequence Features
Document Type
Periodical
Author
Source
IEEE Transactions on Affective Computing, 12(3):622-635, Sep. 2021
Subject
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Task analysis
Speech recognition
Sensors
Brain modeling
Computational modeling
Load modeling
Emotion recognition
Eye activity
speech
head movement
physiological sensing
task load
bag-of-words
topic models
Language
English
ISSN
1949-3045 (Print)
2371-9850 (Electronic)
Abstract
For longitudinal behavior analysis, task type is an unavoidable and important variable. In this article, we propose an event-based behavior modeling approach and employ non-invasive wearable sensing modalities (eye activity, speech, and head movement) to recognize task load level under four different task load types. The novelty lies in converting physiological and behavioral signals into meaningful events and using their sequences across multiple modalities to distinguish load levels and types. We evaluated this approach on head-worn sensor data from 24 participants completing four different tasks, recognizing (i) low and high load level for a given task load type, (ii) low and high load level regardless of load type, and (iii) both load level and load type. Findings show that the recognition rate is reasonable in (i), close to chance level in (ii), and well above chance level in (iii) for eight classes under both participant-dependent and participant-independent schemes. Further, a fusion of the proposed event-based features and conventional continuous features achieved the best or comparable performance in most cases. These results suggest that task type needs to be considered when using continuous features and that the proposed event-based modeling paradigm is promising for longitudinal behavior analysis.
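
To make the event-sequence idea in the abstract concrete, the following is a minimal Python sketch of quantizing multimodal signals into discrete events and pooling them into a bag-of-words feature vector. All signal values, thresholds, event labels (fixation, saccade, silence, etc.), and function names here are illustrative assumptions, not the authors' actual pipeline.

# A minimal sketch of event-based bag-of-words features from multimodal
# signals. Thresholds and event labels are hypothetical placeholders.
from collections import Counter

def signal_to_events(samples, threshold, low_label, high_label):
    """Quantize a 1-D signal into a sequence of discrete event labels."""
    return [high_label if x >= threshold else low_label for x in samples]

def bag_of_events(event_sequence, vocabulary):
    """Count event occurrences to form a fixed-length feature vector."""
    counts = Counter(event_sequence)
    return [counts.get(word, 0) for word in vocabulary]

# Toy multimodal recording: eye activity, speech energy, head movement.
eye = signal_to_events([0.1, 0.9, 0.8, 0.2], 0.5, "fixation", "saccade")
speech = signal_to_events([0.0, 0.7, 0.6, 0.1], 0.5, "silence", "speaking")
head = signal_to_events([0.2, 0.3, 0.9, 0.8], 0.5, "still", "head_turn")

# Merge modalities into one event stream and build the histogram feature;
# this vector could then feed a standard classifier for load level/type.
events = eye + speech + head
vocab = ["fixation", "saccade", "silence", "speaking", "still", "head_turn"]
feature_vector = bag_of_events(events, vocab)
print(feature_vector)  # [2, 2, 2, 2, 2, 2] for this toy input

In the same spirit, the topic-model variant mentioned in the keywords would treat each recording's event histogram as a document over the event vocabulary; the fusion result in the abstract corresponds to concatenating such event-based vectors with conventional continuous statistics before classification.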