Academic article

BubblEX: An Explainable Deep Learning Framework for Point-Cloud Classification
Document Type
article
Source
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15, pp. 6571-6587 (2022)
Subject
Artificial intelligence (AI)
deep learning
explainable artificial intelligence (XAI)
explainability
point cloud
Ocean engineering
TC1501-1800
Geophysics. Cosmic physics
QC801-809
Language
English
ISSN
2151-1535
Abstract
Point-cloud data are nowadays one of the major data sources for describing our environment. Recently, deep architectures have been proposed as a key step in understanding and retrieving semantic information. Despite the great contribution of deep learning to this field, the explainability of these models for 3-D data remains largely unexplored. Explainability, identified as a potential weakness of deep neural networks (DNNs), can help researchers counter skepticism, considering that these models are far from self-explanatory. Although the literature provides many examples of exploiting explainable artificial intelligence (XAI) approaches with 2-D data, only a few studies have investigated it for 3-D DNNs. To overcome these limitations, BubblEX is proposed here, a novel multimodal fusion framework to learn 3-D point features. The BubblEX framework comprises two stages: a “Visualization Module” for visualizing the features learned by the network in its hidden layers, and an “Interpretability Module,” which describes how neighboring points are involved in feature extraction. For our experiments, a dynamic graph convolutional neural network (DGCNN) was used, trained on the ModelNet40 dataset. The developed framework extends a method for obtaining saliency maps from image data to 3-D point-cloud data, allowing the analysis, comparison, and contrasting of multiple features. Moreover, it permits the generation of visual explanations from any DNN-based network for 3-D point-cloud classification without requiring architectural changes or retraining. Our findings will be useful for both scientists and nonexperts in understanding and improving future AI-based models.
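The abstract describes extending gradient-based saliency maps from images to 3-D point clouds without retraining. As a rough illustration of that idea only, the following is a minimal NumPy sketch using a hypothetical toy max-pooled point classifier (not the paper's DGCNN or its actual saliency method): the saliency of each point is the norm of the gradient of the predicted class score with respect to that point's coordinates.

```python
import numpy as np

def point_saliency(points, W1, b1, W2, b2, cls=None):
    """Per-point gradient saliency for a toy max-pooled point classifier.

    points: (N, 3) array of point coordinates.
    Returns an (N,) array of nonnegative saliency scores.
    """
    # Forward pass: per-point features, global max pool, linear classifier.
    h_pre = points @ W1 + b1                # (N, H) pre-activations
    h = np.maximum(h_pre, 0.0)              # ReLU
    cols = np.arange(h.shape[1])
    idx = h.argmax(axis=0)                  # winning point per feature channel
    g = h[idx, cols]                        # (H,) pooled global feature
    scores = g @ W2 + b2                    # (C,) class scores
    if cls is None:
        cls = int(scores.argmax())          # explain the predicted class

    # Backward pass: gradient of scores[cls] w.r.t. the input points.
    dg = W2[:, cls]                         # (H,)
    dh = np.zeros_like(h)
    dh[idx, cols] = dg                      # max pool routes grads to winners
    dh *= (h_pre > 0)                       # ReLU mask
    dX = dh @ W1.T                          # (N, 3)
    return np.linalg.norm(dX, axis=1)       # per-point saliency magnitude

# Toy usage with random weights (purely illustrative).
rng = np.random.default_rng(0)
N, H, C = 64, 16, 4
pts = rng.normal(size=(N, 3))
W1, b1 = rng.normal(size=(3, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, C)), np.zeros(C)
sal = point_saliency(pts, W1, b1, W2, b2)
```

Because the explanation depends only on gradients of the class score with respect to the input, this style of attribution can be computed for any differentiable point-cloud classifier without architectural changes or retraining, which is the property the abstract highlights.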