Journal Article

A Unified Multi-Modality Fusion Framework for Deep Spatio-Spectral-Temporal Feature Learning in Resting-State fMRI Denoising
Document Type
Periodical
Source
IEEE Journal of Biomedical and Health Informatics, 28(4):2067-2078, Apr. 2024
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Feature extraction
Streams
Time series analysis
Integrated circuit modeling
Kernel
Three-dimensional displays
Noise reduction
Resting-state fMRI
denoising
deep learning
convolutional neural network
multi-modality fusion
Language
English
ISSN
2168-2194
2168-2208
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) is a widely used functional neuroimaging technique for investigating functional brain networks. However, rs-fMRI data are often contaminated with noise and artifacts that adversely affect the results of rs-fMRI studies. Several machine/deep learning methods have achieved impressive performance in automatically regressing out the noise-related components decomposed from rs-fMRI data, which are expressed as pairs of a spatial map and its associated time series. However, most previous methods analyze each modality of the noise-related components individually and simply aggregate the decision-level information (or knowledge) extracted from each modality to make a final decision. Moreover, these approaches consider only a limited set of modalities, making it difficult to explore the class-discriminative spectral information of noise-related components. To overcome these limitations, we propose a unified deep attentive spatio-spectral-temporal feature fusion framework. We first adopt a learnable wavelet transform module at the input level of the framework to elaborately explore the spectral information in subsequent processes. We then construct a feature-level multi-modality fusion module to efficiently exchange information among the multi-modality inputs in the feature space. Finally, we design confidence-based voting strategies for decision-level fusion at the end of the framework to make a robust final decision. In our experiments, the proposed method achieved remarkable performance for noise-related component detection on various rs-fMRI datasets.
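To make the three-stage design described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a learnable 1-D filter bank standing in for the learnable wavelet transform, small convolutional encoders for the temporal and spectral streams, a linear encoder for a flattened spatial-map descriptor, and a confidence-weighted (max-softmax) combination of the per-branch predictions standing in for the confidence-based voting. All module names, dimensions, and the specific weighting rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableWaveletTransform(nn.Module):
    """Learnable band-pass filter bank over a component time series.

    Hypothetical stand-in for the paper's learnable wavelet transform:
    a set of 1-D convolution kernels decomposes the signal into spectral streams.
    """
    def __init__(self, n_bands: int = 4, kernel_size: int = 65):
        super().__init__()
        self.filters = nn.Conv1d(1, n_bands, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, ts: torch.Tensor) -> torch.Tensor:
        # ts: (batch, 1, time) -> (batch, n_bands, time)
        return self.filters(ts)


class ModalityEncoder(nn.Module):
    """Small 1-D CNN encoder producing a fixed-length feature vector."""
    def __init__(self, in_ch: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FusionClassifier(nn.Module):
    """Feature-level fusion of temporal, spectral, and spatial features,
    followed by per-branch classifiers combined by confidence-weighted voting.
    """
    def __init__(self, feat_dim: int = 64, spatial_dim: int = 128):
        super().__init__()
        self.wavelet = LearnableWaveletTransform()
        self.temporal_enc = ModalityEncoder(1, feat_dim)
        self.spectral_enc = ModalityEncoder(4, feat_dim)
        self.spatial_enc = nn.Sequential(nn.Linear(spatial_dim, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(feat_dim, 2) for _ in range(3)])
        self.fused_head = nn.Linear(feat_dim * 3, 2)

    def forward(self, ts: torch.Tensor, spatial_vec: torch.Tensor) -> torch.Tensor:
        # Per-modality features (temporal, spectral, spatial).
        f_t = self.temporal_enc(ts)
        f_s = self.spectral_enc(self.wavelet(ts))
        f_m = self.spatial_enc(spatial_vec)
        feats = [f_t, f_s, f_m]
        # Feature-level fusion branch plus one decision branch per modality.
        logits = [self.fused_head(torch.cat(feats, dim=1))]
        logits += [head(f) for head, f in zip(self.heads, feats)]
        # Confidence-based voting: weight each branch by its max softmax probability.
        probs = [F.softmax(l, dim=1) for l in logits]
        conf = torch.stack([p.max(dim=1).values for p in probs], dim=0)  # (branches, batch)
        weights = conf / conf.sum(dim=0, keepdim=True)
        final = sum(w.unsqueeze(1) * p for w, p in zip(weights, probs))
        return final  # (batch, 2): probability of signal vs. noise component


if __name__ == "__main__":
    model = FusionClassifier()
    ts = torch.randn(8, 1, 200)      # component time series (hypothetical length)
    spatial = torch.randn(8, 128)    # flattened spatial-map descriptor (hypothetical size)
    print(model(ts, spatial).shape)  # torch.Size([8, 2])
```

In this sketch the feature-level fusion is a simple concatenation and the decision-level fusion is a soft, confidence-weighted average over branches; the published framework uses attentive fusion and specific voting strategies whose details are not given in the abstract.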