Journal Article

Classification of Motor Imagery Based on Multi-Scale Feature Extraction and the Channel-Temporal Attention Module
Document Type
Periodical
Source
IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 3075-3085, 2023
Subject
Bioengineering
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Feature extraction
Electroencephalography
Brain modeling
Task analysis
Data mining
Deep learning
Convolution
Motor imagery
EEG
multi-scale convolution
convolution neural network
attention module
Language
English
ISSN
1534-4320 (Print)
1558-0210 (Electronic)
Abstract
Motor imagery (MI) is a popular paradigm for controlling electroencephalogram (EEG)-based brain-computer interface (BCI) systems, and many methods have been developed to classify MI-related EEG activity accurately. Recently, deep learning has drawn increasing attention in the BCI research community because it does not require sophisticated signal preprocessing and can extract features automatically. In this paper, we propose a deep learning model for MI-based BCI systems: a convolutional neural network built on a multi-scale module and a channel-temporal attention module (CTAM), which we call MSCTANN. The multi-scale module extracts a rich set of features, while the attention module, comprising a channel attention module and a temporal attention module, allows the model to focus on the most important of the extracted features. The multi-scale module and the attention module are connected by a residual module, which avoids degradation of the network. Our network is built from these three core modules, which together improve its ability to recognize EEG signals. Experimental results on three datasets (BCI Competition IV 2a, III IIIa, and IV 1) show that the proposed method outperforms other state-of-the-art methods, with accuracy rates of 80.6%, 83.56%, and 79.84%, respectively. The model decodes EEG signals stably and achieves efficient classification while using fewer network parameters than comparable state-of-the-art methods.
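Note: the abstract describes the architecture only at a high level. As a rough, non-authoritative illustration, the PyTorch sketch below shows one plausible realization of a multi-scale convolution block followed by channel and temporal attention gates joined through a residual connection. All kernel scales, layer widths, and the squeeze-and-excitation style attention formulation are assumptions chosen for illustration, not the paper's actual MSCTANN implementation.

# Hypothetical sketch of a multi-scale + channel-temporal attention block,
# loosely following the abstract's description. Sizes and the attention
# formulation are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel temporal convolutions at several kernel scales."""
    def __init__(self, in_ch=22, out_ch=16, scales=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in scales
        )

    def forward(self, x):                      # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

class ChannelTemporalAttention(nn.Module):
    """Channel gate (squeeze-and-excitation style) plus a temporal gate."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )
        self.temporal_conv = nn.Sequential(
            nn.Conv1d(ch, 1, kernel_size=7, padding=3), nn.Sigmoid()
        )

    def forward(self, x):                      # x: (batch, ch, time)
        w_c = self.channel_fc(x.mean(dim=2)).unsqueeze(2)  # channel weights
        x = x * w_c
        w_t = self.temporal_conv(x)                        # temporal weights
        return x * w_t

class MSCTABlock(nn.Module):
    """Multi-scale features and attention joined by a residual path."""
    def __init__(self, in_ch=22, out_ch=16, scales=(15, 31, 63)):
        super().__init__()
        total = out_ch * len(scales)
        self.ms = MultiScaleBlock(in_ch, out_ch, scales)
        self.att = ChannelTemporalAttention(total)
        self.skip = nn.Conv1d(in_ch, total, kernel_size=1)  # match shapes

    def forward(self, x):
        y = self.ms(x)
        return self.att(y) + self.skip(x)      # residual avoids degradation

For a dataset such as BCI Competition IV 2a (22 EEG channels, four MI classes), a forward pass would look like out = MSCTABlock()(torch.randn(8, 22, 1000)), producing a (8, 48, 1000) feature map that a downstream classifier head could pool and project onto the four classes.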