Nneke, Ngozi, Crockett, Keeley ORCID: https://orcid.org/0000-0003-1941-6201 and Latham, Annabel ORCID: https://orcid.org/0000-0002-8410-7950 (2025) Remember Non-Specialists? How effective are XAI explanations in helping non-specialists understand an AI model’s decision? In: IEEE Symposium Series on Computational Intelligence (SSCI) 2025, 17 March 2025 - 20 March 2025, Trondheim, Norway. (In Press)
Abstract
Explainable Artificial Intelligence (XAI) can uncover the inner workings of black-box models, enhancing transparency and building trust in AI-driven decision-making. However, there is ongoing debate about the effectiveness of XAI explanations: specifically, whether they are understandable to users who lack technical knowledge or have low digital literacy, and whether they give users the confidence to question an automated decision based on an AI model's outcome. To address these challenges, we propose adapting metrics from cognitive psychology's Mental Model approach to assess non-specialist (non-technical) participants' understanding of two different types of XAI explanation (SHAP and example-based). Using a healthcare scenario, we train a random forest model to classify a cancer diagnosis and generate a series of explanation types. This paper presents a study evaluating the effectiveness of these explanation types with non-specialist users, using metrics including understanding, trust, and perceived usefulness. The results show that non-specialist users who had received one training session in SHAP trusted the SHAP explanation more than the example-based explanation; however, 81% of participants found example-based explanations more useful.
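The abstract describes training a random forest model on a cancer classification task and generating SHAP explanations of its predictions. The sketch below illustrates how such explanations could be produced, assuming scikit-learn and the shap library and using the public breast cancer dataset as a hypothetical stand-in; it is not the authors' actual pipeline.

```python
# Illustrative sketch only: a random forest cancer classifier with SHAP
# explanations, using scikit-learn's breast cancer dataset as a stand-in
# for the study's data (the paper's actual dataset and pipeline are not
# given here).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles; each
# value is one feature's contribution to a single prediction, which is the
# information a SHAP-based explanation presents to a participant.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, this is a list of per-class arrays or a
# single (n_samples, n_features, n_classes) array.
print("SHAP values shape:", np.asarray(shap_values).shape)
```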