Academic Paper

XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series Classification
Document Type
Conference
Source
2023 International Conference on Machine Learning and Applications (ICMLA), pp. 1126-1131, Dec. 2023
Subject
Computing and Processing
Engineering Profession
Robotics and Control Systems
Signal Processing and Analysis
Measurement
Systematics
Reviews
Time series analysis
Machine learning
Benchmark testing
Feature extraction
Explainable AI
Time Series Classification
XAI Metrics
Language
English
ISSN
1946-0759
Abstract
Despite the growing body of work on explainable machine learning in time series classification (TSC), it remains unclear how to evaluate different explainability methods. Resorting to qualitative assessment and user studies to evaluate explainers for TSC is difficult since humans have difficulty understanding the underlying information contained in time series data. Therefore, a systematic review and quantitative comparison of explanation methods to confirm their correctness become crucial. While steps toward standardized evaluation have been taken for tabular, image, and textual data, benchmarking explainability methods on time series is challenging because a) traditional metrics are not directly applicable, b) implementations and adaptations of traditional metrics for time series vary across the literature, and c) baseline implementations vary as well. This paper proposes XTSC-Bench, a benchmarking tool providing standardized datasets, models, and metrics for evaluating explanation methods on TSC. We analyze 3 perturbation-, 6 gradient-, and 2 example-based explanation methods applied to TSC, showing that improvements in the explainers' robustness and reliability are necessary, especially for multivariate data.
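For readers unfamiliar with quantitative explainer evaluation, the sketch below illustrates one common perturbation-style faithfulness check of the kind such benchmarks standardize: the most salient timesteps of an attribution map are replaced with a baseline value, and the resulting drop in the classifier's confidence is measured. The function names (`perturbation_faithfulness`, `predict_proba`), the top-k selection, and the per-channel mean baseline are illustrative assumptions, not the XTSC-Bench API or the paper's exact metric.

```python
import numpy as np

def perturbation_faithfulness(predict_proba, x, attribution, target_class, top_k=10):
    """Illustrative perturbation check for a TSC explainer (not the XTSC-Bench API).

    predict_proba : callable mapping a (1, T, C) array to class probabilities.
    x             : time series of shape (T, C), univariate if C == 1.
    attribution   : saliency scores with the same shape as x.
    target_class  : index of the predicted class being explained.
    top_k         : number of most-salient entries to perturb.
    """
    base_prob = predict_proba(x[None, ...])[0, target_class]

    # Indices of the top-k most salient (timestep, channel) entries.
    flat_idx = np.argsort(np.abs(attribution).ravel())[::-1][:top_k]
    idx = np.unravel_index(flat_idx, x.shape)

    # Replace salient values with the per-channel mean (assumed baseline choice).
    x_pert = x.copy()
    x_pert[idx] = x.mean(axis=0)[idx[1]]

    pert_prob = predict_proba(x_pert[None, ...])[0, target_class]
    # A larger confidence drop suggests the attribution marks genuinely important inputs.
    return base_prob - pert_prob
```

Whether such a score is comparable across explainers depends heavily on the chosen baseline and perturbation strategy, which is precisely the kind of standardization the paper argues a benchmark must provide.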