Academic Journal

Automated grading of enlarged perivascular spaces in clinical imaging data of an acute stroke cohort using an interpretable, 3D deep learning framework.
Document Type
Article
Source
Scientific Reports. 1/17/2022, Vol. 12 Issue 1, p1-7. 7p.
Subject
*CEREBRAL small vessel diseases
*DIAGNOSTIC imaging
*BASAL ganglia
*DEEP learning
*STROKE patients
Language
English
ISSN
2045-2322
Abstract
Enlarged perivascular spaces (EPVS), specifically in stroke patients, have been shown to correlate strongly with other measures of small vessel disease and with cognitive impairment at 1-year follow-up. Grading of EPVS is challenging and time-consuming and is usually based on a subjective visual rating scale. The purpose of the current study was to develop an interpretable, 3D neural network for grading EPVS severity at the level of the basal ganglia using clinical-grade imaging in a heterogeneous acute stroke cohort, in the context of total cerebral small vessel disease (CSVD) burden. T2-weighted images from a retrospective cohort of 262 acute stroke patients, collected in 2015 from 5 regional medical centers, were used for analyses. Patients were given a label of 0 for none-to-mild EPVS (< 10) and 1 for moderate-to-severe EPVS (≥ 10). A three-dimensional residual network of 152 layers (3D-ResNet-152) was created to predict EPVS severity, and 3D gradient class activation mapping (3DGradCAM) was used for visual interpretation of the results. Our model achieved an accuracy of 0.897 and an area under the curve of 0.879 on a hold-out test set of 15% of the total cohort (n = 39). 3DGradCAM showed areas of focus in physiologically valid locations, including other areas where EPVS are prevalent. These maps also suggested that the distribution of class activation values is indicative of the model's confidence in its decision. Potential clinical implications of our results include: (1) support for the feasibility of automated EPVS scoring using clinical-grade neuroimaging data, potentially alleviating rater subjectivity and improving confidence in visual rating scales, and (2) demonstration that explainable models are critical for clinical translation. [ABSTRACT FROM AUTHOR]
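The abstract gives no implementation details, so the following PyTorch sketch is a hypothetical illustration of the general approach it describes (a 3D residual CNN that classifies a T2-weighted volume as none-to-mild vs. moderate-to-severe EPVS, interpreted with hook-based 3D Grad-CAM), not the authors' code. The network shown is a shallow stand-in rather than the full 3D-ResNet-152, and all class names, layer choices, and input shapes are assumptions.

```python
# Minimal, hypothetical sketch of the approach described in the abstract:
# a 3D residual classifier for binary EPVS grading plus hook-based 3D Grad-CAM.
# This is NOT the authors' 3D-ResNet-152; it is a shallow stand-in for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock3D(nn.Module):
    """Basic 3D residual block: two conv-BN layers with an identity/projection skip."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.skip = (
            nn.Sequential(nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                          nn.BatchNorm3d(out_ch))
            if (stride != 1 or in_ch != out_ch) else nn.Identity()
        )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.skip(x))


class EPVSClassifier3D(nn.Module):
    """Shallow 3D residual classifier: T2 volume -> 2 logits (EPVS grade 0 or 1)."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 16, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2))
        self.layer1 = ResidualBlock3D(16, 32, stride=2)
        self.layer2 = ResidualBlock3D(32, 64, stride=2)  # last conv block: Grad-CAM target
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        return self.head(self.layer2(self.layer1(self.stem(x))))


def grad_cam_3d(model, volume, target_layer, class_idx=None):
    """Standard Grad-CAM extended to 3D: weight the target layer's activations by
    the spatially averaged gradients of the chosen class score, then upsample."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(volume)                              # volume: (1, 1, D, H, W)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = grads["g"].mean(dim=(2, 3, 4), keepdim=True)      # (1, C, 1, 1, 1)
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=volume.shape[2:],
                            mode="trilinear", align_corners=False)
        return (cam / (cam.max() + 1e-8)).detach()          # normalised 3D heatmap
    finally:
        h1.remove()
        h2.remove()


if __name__ == "__main__":
    model = EPVSClassifier3D().eval()
    t2_volume = torch.randn(1, 1, 64, 128, 128)             # placeholder T2-weighted volume
    heatmap = grad_cam_3d(model, t2_volume, target_layer=model.layer2)
    print(heatmap.shape)                                    # torch.Size([1, 1, 64, 128, 128])
```

In this sketch the Grad-CAM heatmap plays the role the abstract assigns to 3DGradCAM: it highlights the voxels that most influenced the predicted grade, which is what allows a reader to check that the model attends to physiologically plausible regions such as the basal ganglia.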