Designing for AI Explainability in Clinical Context

Authors
Gruen, Daniel M.
Chari, Shruthi
Foreman, Morgan A.
Seneviratne, Oshani
Richesson, Rachel
Das, Amar K.
McGuinness, Deborah L.
Issue Date
2021-02
Type
Article
Abstract
The growing use of artificial intelligence in medical settings has led to increased interest in AI Explainability (XAI). While research on XAI has largely focused on the goal of increasing users' appropriate trust and application of insights from AI systems, we see intrinsic value in explanations themselves and in the role they play in furthering clinicians' understanding of a patient, disease, or system. Our research studies explanations as a core component of bi-directional communication between the user and AI technology. As such, explanations must be understood and evaluated in context, reflecting the specific questions and information needs that arise in actual use. In this paper, we present a framework and approach for identifying XAI needs during the development of human-centered AI. We illustrate this approach through a user study and design prototype, which situated endocrinologists in a clinical setting involving guideline-based diabetes treatment. Our results show the variety of explanation types needed in clinical settings, the usefulness of our approach for identifying these needs early while a system is still being designed, and the importance of keeping humans in the loop during both the development and use of AI systems.
Full Citation
Daniel M. Gruen, Shruthi Chari, Morgan A. Foreman, Oshani Seneviratne, Rachel Richesson, Amar K. Das, Deborah L. McGuinness. "Designing for AI Explainability in Clinical Context." AAAI 2021 Workshop on Trustworthy AI for Healthcare, February 2021.
Publisher
AAAI