Show simple item record

dc.contributor.author	Chari, Shruthi
dc.contributor.author	Seneviratne, Oshani
dc.contributor.author	Gruen, Daniel
dc.contributor.author	Foreman, Morgan
dc.contributor.author	Das, Amar
dc.contributor.author	McGuinness, Deborah
dc.description.abstract	Explainability has been a goal for Artificial Intelligence (AI) systems since their conception, and the need for it has grown as increasingly complex AI models are used in critical, high-stakes settings such as healthcare. Explanations have often been added to an AI system in a non-principled, post-hoc manner. With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end-user needs to specific explanation types and the system's AI capabilities. We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types. We indicate how the ontology can support user requirements for explanations in the domain of healthcare. We evaluate our ontology with a set of competency questions geared towards a system designer who might use our ontology to decide which explanation types to include, given a combination of users' needs and a system's capabilities, both in system design settings and in real-time operations. Through the use of this ontology, system designers will be able to make informed choices on which explanations AI systems can and should provide.
dc.subject	Health Empowerment by Analytics, Learning, and Semantics (HEALS)
dc.title	Explanation Ontology: A Model of Explanations for User-Centered AI

Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
