Academic Paper

Explainable agency: human preferences for simple or complex explanations
Document Type
Working Paper
Subject
Computer Science - Human-Computer Interaction
Language
English
Abstract
Research in cognitive psychology has established that whether people prefer simpler explanations to complex ones is context-dependent, but the question of 'simple vs. complex' becomes critical when an artificial agent seeks to explain its decisions or predictions to humans. We present a model for abstracting causal reasoning chains for the purpose of explanation. The model uses a set of rules to progressively abstract different types of causal information in causal proof traces. We performed online studies with 123 Amazon MTurk participants and five industry experts across two domains: maritime patrol and weather prediction. We found that participants' satisfaction with the generated explanations depended on the consistency of relationships among the causes that explain an event (coherence), and that the important question is not whether people prefer simple or complex explanations, but what types of causal information are relevant to individuals in specific contexts.
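
To make the abstract's idea of rule-based abstraction over causal proof traces concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a tree-structured trace of typed causes; the Cause type, the cause kinds ("sensor", "background", etc.), the abstract function, and the weather-related labels are all hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical node in a causal proof trace: a labelled cause with a
    # type and the child causes that support it.
    @dataclass
    class Cause:
        label: str
        kind: str
        children: List["Cause"] = field(default_factory=list)

    # One abstraction rule: remove causes of a given kind, promoting their
    # children, so the chain is progressively simplified while staying connected.
    def abstract(node: Cause, drop_kind: str) -> Cause:
        new_children: List[Cause] = []
        for child in node.children:
            child = abstract(child, drop_kind)
            if child.kind == drop_kind:
                new_children.extend(child.children)  # promote grandchildren
            else:
                new_children.append(child)
        return Cause(node.label, node.kind, new_children)

    # Toy trace loosely themed on the weather-prediction domain.
    trace = Cause("storm warning issued", "conclusion", [
        Cause("pressure dropping rapidly", "observation", [
            Cause("barometer reading 990 hPa", "sensor"),
        ]),
        Cause("low-pressure systems bring storms", "background"),
    ])

    simpler  = abstract(trace, "sensor")        # hide raw sensor detail
    simplest = abstract(simpler, "background")  # hide background knowledge

Applying the rules in sequence yields explanations at increasing levels of abstraction, from the full proof trace to a minimal summary; on the abstract's account, the useful question is which of these cause types a given person in a given context actually wants to see.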