I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI

Michael Chromik, Malin Eiband, 2 Authors, A. Butz

2021 · DOI: 10.1145/3397481.3450644
International Conference on Intelligent User Interfaces · 130 Citations

TLDR

This work examines whether non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations, and how their perceived understanding decreases once it is put to the test.

Abstract

Unintended consequences of deployed AI systems have fueled the call for more interpretability in AI. Explainable AI (XAI) systems often provide users with simplifying local explanations for individual predictions but leave it up to them to construct a global understanding of the model's behavior. In this work, we examine whether non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations. We applied a mixed-methods approach consisting of a moderated study with 40 participants and an unmoderated study with 107 crowd workers, using a spreadsheet-like explanation interface based on the SHAP framework. We observed how non-technical users form their mental models of global AI model behavior from local explanations, and how their perceived understanding decreases once it is examined.