Academic Article

Multimodal 3D Deep Learning for Early Diagnosis of Alzheimer’s Disease
Document Type
Periodical
Source
IEEE Access, 12:46278-46289, 2024
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Feature extraction
Three-dimensional displays
Magnetic resonance imaging
Convolution
Solid modeling
Proteins
Deep learning
Computer aided diagnosis
Convolutional neural networks
Image classification
Positron emission tomography
dementia
Language
English
ISSN
2169-3536
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disease that affects the elderly and leads to cognitive decline and memory loss. Treatments that stop or slow the progression of AD have not yet been discovered; therefore, delaying its progression is the only option, which makes early diagnosis of AD crucial. Additionally, although Aβ plaques and tau proteins are considered the causes of early AD, few studies have used this information to diagnose early AD. In this study, a middle-fusion multimodal model is proposed for the diagnosis of early AD. The proposed multimodal model extracts features without loss using a depthwise separable convolution block without an activation function. Subsequently, middle fusion is applied using mixed skip connections and weight-sharing convolution blocks, both designed to learn the complex relationships between modalities. In contrast to other studies, the proposed approach has three main novelties. 1) A middle-fusion multimodal model is proposed for the early diagnosis of AD. 2) The proposed model is evaluated using the entire ADNI series, including T1-weighted magnetic resonance imaging (T1w MRI) and 18F-FluoroDeoxyGlucose positron emission tomography (FDG PET) from the ADNI1 dataset, as well as Aβ PET and tau protein PET from the ADNI2 and ADNI3 datasets. 3) A novel region-of-interest (ROI) extraction method is proposed for the hippocampus, middle temporal, and inferior temporal regions, which are known to be affected in the early stages of AD. In the experimental results, the proposed multimodal model achieved a balanced accuracy of 1.00 for the AD vs. cognitively normal (CN) task and 0.76 for the mild cognitive impairment vs. CN task.
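
The abstract describes two architectural ideas: a 3D depthwise separable convolution block used without an activation function, and a middle-fusion step in which modality streams exchange skip connections and share convolution weights. The sketch below is a minimal PyTorch illustration of those ideas only; the module names, channel sizes, and the exact fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions labeled): a 3D depthwise separable convolution
# block with no activation, and a hypothetical middle-fusion block in which
# MRI and PET streams share one convolution block's weights and receive a
# skip connection mixed with the other modality's features.
import torch
import torch.nn as nn


class DepthwiseSeparableConv3d(nn.Module):
    """3D depthwise separable convolution without an activation function."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        # Pointwise: 1x1x1 convolution mixes channels across the volume.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


class MiddleFusionBlock(nn.Module):
    """Hypothetical middle fusion: one weight-shared conv block processes both
    modality streams, and each output is mixed with the other stream's input
    via a skip connection (assumed additive form)."""

    def __init__(self, channels: int):
        super().__init__()
        self.shared = DepthwiseSeparableConv3d(channels, channels)  # shared weights

    def forward(self, mri: torch.Tensor, pet: torch.Tensor):
        mri_f = self.shared(mri)   # same weights applied to both streams
        pet_f = self.shared(pet)
        # "Mixed skip connection" (assumption): add the other modality's input.
        return mri_f + pet, pet_f + mri


if __name__ == "__main__":
    mri = torch.randn(1, 8, 32, 32, 32)  # toy 3D MRI feature volume
    pet = torch.randn(1, 8, 32, 32, 32)  # toy 3D PET feature volume
    fused_mri, fused_pet = MiddleFusionBlock(8)(mri, pet)
    print(fused_mri.shape, fused_pet.shape)  # both torch.Size([1, 8, 32, 32, 32])
```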