Academic Paper

Towards explaining recommendations through local surrogate models
Document Type
Conference
Source
Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 1671-1678
Subject
explanations
factorization machines
recommender systems
transparency
Language
English
Abstract
The increase in sophistication and complexity of recommendation algorithms has turned them into black boxes whose algorithmic reasoning behind predictions is hard for users to understand. A popular approach for increasing model interpretability in the machine learning community is Local Interpretable Model-agnostic Explanations (LIME), which learns local interpretable models to explain single predictions of any model. In this paper, we propose an adaptation of LIME for any recommender system. We evaluate our adaptation of LIME on Factorization Machines, a well-known black-box recommender algorithm trained on the MovieLens 20M dataset. We compare our approach to a state-of-the-art model-agnostic method based on association rules and show that our proposed adaptation is a promising alternative: it is comparable in terms of fidelity, i.e., it can locally mimic the behavior of a complex recommender, and has the additional advantage of enabling different styles of explanations. Finally, we present a case study investigating the feasibility and limitations of our proposed adaptation.
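The LIME approach referenced in the abstract explains a single prediction by fitting a simple interpretable model on perturbed samples weighted by their proximity to the instance being explained. The sketch below is a minimal, generic illustration of that idea applied to a rating predictor; it is not the paper's implementation, and the names predict_rating and explain_locally, the binary perturbation scheme, and the kernel width are all assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_locally(predict_rating, x, n_samples=1000, kernel_width=0.75, seed=0):
        """Fit a weighted linear surrogate around instance `x`.

        `predict_rating` is an assumed black-box recommender that maps a
        binary user/item feature vector to a predicted rating.
        """
        rng = np.random.default_rng(seed)
        d = x.shape[0]

        # Build a neighborhood by randomly switching off active features
        # (suitable for binary feature vectors, as assumed here).
        masks = rng.integers(0, 2, size=(n_samples, d))
        Z = x * masks

        # Query the black-box recommender on each perturbed sample.
        y = np.array([predict_rating(z) for z in Z])

        # Weight samples by proximity to `x` with an exponential kernel,
        # so nearby perturbations dominate the surrogate fit.
        dist = np.linalg.norm(Z - x, axis=1)
        weights = np.exp(-(dist ** 2) / kernel_width ** 2)

        # The surrogate's coefficients act as per-feature contributions
        # to the prediction in the local neighborhood of `x`.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=weights)
        return surrogate.coef_

The returned coefficients can be ranked by magnitude to surface which features most influenced the recommendation near that instance, which is the kind of local fidelity the abstract refers to.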
