Academic Paper

Toward Transparent AI for Neurological Disorders: A Feature Extraction and Relevance Analysis Framework
Document Type
Periodical
Source
IEEE Access, vol. 12, pp. 37731-37743, 2024
Subject
Feature extraction
Convolutional neural networks
Neurological diseases
Medical diagnostic imaging
Neurons
Alzheimer's disease
Brain cancer
Tumors
Epilepsy
Artificial intelligence
Explainable AI
brain tumor
deep learning
epilepsy
explainable artificial intelligence
Language
English
ISSN
2169-3536
Abstract
The lack of interpretability and transparency in deep learning architectures has raised concerns among professionals in industry and academia. One of the main concerns is the ability to trust these architectures without being given any insight into the decision-making process. Despite these concerns, researchers continue to explore new models and architectures that do not incorporate explainability into their main construct. In the medical industry, it is crucial to provide explanations for any decision, as patient health outcomes can vary according to the decisions made. Furthermore, in medical research, incorrectly diagnosed neurological conditions are a high-cost error that contributes significantly to morbidity and mortality. Therefore, the development of new transparent techniques for neurological conditions is critical. This paper presents a novel Autonomous Relevance Technique for an Explainable neurological disease prediction framework, called ART-Explain. The proposed technique autonomously extracts features from within the deep learning architecture to create novel visual explanations of the resulting prediction. ART-Explain is an end-to-end autonomous explainable technique designed to present an intuitive and holistic overview of a prediction made by a deep learning classifier. To evaluate the effectiveness of our approach, we benchmark it against other state-of-the-art techniques using three data sets of neurological disorders. The results demonstrate the generalisation capabilities of our technique and its suitability for real-world applications. By providing transparent insights into the decision-making process, ART-Explain can improve end-user trust and enable a better understanding of classification outcomes in the detection of neurological diseases.
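The abstract describes extracting feature maps from inside a deep learning classifier and aggregating them into a visual explanation of a prediction. The sketch below illustrates only that general idea in the spirit of activation-map-based explainability methods; it is not the authors' ART-Explain algorithm, whose details are not given in this record. The function names (`conv2d_valid`, `relevance_map`) and the absolute-mean aggregation scheme are assumptions made for illustration.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2D valid convolution (no padding, stride 1),
    standing in for one convolutional feature map of a classifier."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relevance_map(image, kernels):
    """Aggregate the absolute activations of several feature maps into
    a single heatmap normalised to [0, 1] for visual overlay."""
    maps = [np.abs(conv2d_valid(image, k)) for k in kernels]
    heat = np.mean(maps, axis=0)
    rng = heat.max() - heat.min()
    return (heat - heat.min()) / rng if rng > 0 else np.zeros_like(heat)

# Toy example: a synthetic 8x8 "scan" and two hand-picked kernels
# (a vertical-edge detector and a simple box blur).
img = np.arange(64, dtype=float).reshape(8, 8)
kernels = [np.array([[1., -1.], [1., -1.]]), np.ones((2, 2)) / 4]
heat = relevance_map(img, kernels)
```

In a real pipeline the feature maps would come from the trained network's hidden layers (e.g. via forward hooks in a deep learning framework) rather than from fixed kernels, and the resulting heatmap would be upsampled and overlaid on the input scan.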