Academic Paper

TcT: Temporal and channel Transformer for EEG-based Emotion Recognition
Document Type
Conference
Source
2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), pp. 366-371, Jul. 2022
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Emotion recognition
Neuroscience
Databases
Computational modeling
Brain modeling
Transformers
Electroencephalography
Emotion Recognition
Electroencephalogram (EEG)
Self-attention
Transformer
Language
ISSN
2372-9198
Abstract
In recent years, Electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interfaces. Studies in the neuroscience domain have shown that different emotional states may activate different brain regions at different time points. Although EEG signals offer high temporal resolution and strong global correlation, their low signal-to-noise ratio and abundant redundant information make fast emotion recognition challenging. To address these problems, we propose a Temporal and channel Transformer (TcT) model for emotion recognition that operates directly on raw preprocessed EEG data. Within the model, we propose a TcT self-attention mechanism that simultaneously captures temporal and channel dependencies. A sliding-window weight-sharing strategy gradually refines features from a coarse temporal granularity and reduces the complexity of the attention computation. Residual connections pass the original signal between layers to integrate features from different depths. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.
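The abstract's core ideas can be illustrated with a toy sketch: slice a multi-channel EEG segment into sliding temporal windows, flatten each window across channels into one token (so attention mixes temporal and channel content jointly), and apply a single set of shared projection weights to every window. This is a minimal, hypothetical NumPy illustration of the general idea, not the authors' TcT implementation; the window size, projection dimension, and random weight initialization are assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sliding_window_attention(x, window, d_k=8, seed=0):
    """Toy self-attention over sliding temporal windows of an EEG segment.

    x: array of shape (channels, time) holding raw EEG samples.
    Each non-overlapping window of `window` samples is flattened across
    all channels into one token, so the attention scores capture both
    temporal and channel dependencies at once. One set of Q/K/V weights
    is shared by every window (weight sharing across the sequence).
    All weights are random: this sketches the computation, not a trained model.
    """
    c, t = x.shape
    n = t // window                       # number of window tokens
    tokens = (x[:, :n * window]
              .reshape(c, n, window)      # split time axis into windows
              .transpose(1, 0, 2)         # (windows, channels, window)
              .reshape(n, c * window))    # flatten each window into a token
    rng = np.random.default_rng(seed)
    # Shared projection weights, applied identically to every window token.
    Wq = rng.standard_normal((c * window, d_k)) / np.sqrt(c * window)
    Wk = rng.standard_normal((c * window, d_k)) / np.sqrt(c * window)
    Wv = rng.standard_normal((c * window, d_k)) / np.sqrt(c * window)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n) inter-window dependencies
    return attn @ V                          # (n, d_k) refined window features

# Example: a 32-channel, 128-sample segment with 16-sample windows -> 8 tokens.
eeg = np.random.default_rng(1).standard_normal((32, 128))
out = sliding_window_attention(eeg, window=16)
print(out.shape)  # (8, 8)
```

Because the projections are shared across windows, the parameter count is independent of segment length, and attention over the shorter token sequence is cheaper than attention over every raw sample, which is the kind of complexity reduction the abstract attributes to the sliding-window strategy.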