Academic paper

Explainable AI for survival analysis: a median-SHAP approach
Document Type
Working Paper
Source
Subject
Computer Science - Machine Learning
Statistics - Methodology
Statistics - Machine Learning
Language
English
Abstract
With the adoption of machine learning into routine clinical practice comes the need for Explainable AI methods tailored to medical applications. Shapley values have sparked wide interest for locally explaining models. Here, we demonstrate that their interpretation depends strongly on both the summary statistic and the estimator used for it, which together define what we identify as an 'anchor point'. We show that the convention of using a mean anchor point may generate misleading interpretations for survival analysis, and we introduce median-SHAP, a method for explaining black-box models that predict individual survival times.
Comment: Accepted to the Interpretable Machine Learning for Healthcare (IMLH) workshop of the ICML 2022 Conference
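The abstract's central point, that attributions change with the choice of anchor point, can be illustrated with a small numerical sketch. The code below is not the authors' median-SHAP implementation; it is a toy example under stated assumptions: a made-up linear "survival time" model, a skewed synthetic background sample, and a brute-force Shapley computation in which features outside a coalition are imputed with either the mean or the median of the background. All names (`shapley_values`, `model`, `background`) are illustrative assumptions, not from the paper.

```python
# Toy illustration (not the authors' method) of how the anchor point used to
# impute absent features changes Shapley attributions for the same instance.
from itertools import combinations
from math import comb
import numpy as np

def shapley_values(model, x, anchor):
    """Exact Shapley values for instance `x`, imputing features outside a
    coalition with the corresponding entries of `anchor`."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for subset in combinations(others, size):
                # Shapley weight |S|!(d-|S|-1)!/d! written as 1/(d*C(d-1,|S|)).
                weight = 1.0 / (d * comb(d - 1, size))
                idx = list(subset)
                with_i = anchor.copy()
                without_i = anchor.copy()
                with_i[idx + [i]] = x[idx + [i]]
                without_i[idx] = x[idx]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

rng = np.random.default_rng(0)
# Right-skewed background data, mimicking skewed survival-style covariates.
background = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 3))
model = lambda z: float(z[0] + 2.0 * z[1] - 0.5 * z[2])  # toy "survival time" model
x = np.array([1.0, 2.0, 0.5])

phi_mean = shapley_values(model, x, background.mean(axis=0))
phi_median = shapley_values(model, x, np.median(background, axis=0))
print("mean-anchored attributions  :", np.round(phi_mean, 3))
print("median-anchored attributions:", np.round(phi_median, 3))
```

Because the background sample is right-skewed, its mean and median differ, so the two anchor choices produce noticeably different attributions for the same prediction, which is the effect the abstract describes.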