Directions for Explainable Knowledge-Enabled Systems

Authors
Chari, Shruthi
Gruen, Daniel M.
Seneviratne, Oshani
McGuinness, Deborah L.
Issue Date
2020
Type
Book chapter
Abstract
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated in recent years. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of sophisticated machine learning techniques, explainability has become increasingly critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that account for trustworthiness, comprehensibility, explicit provenance, and context awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields, and we use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's Artificial Intelligence applications. We define each type and provide an example question that would motivate the need for that style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements, and further help them produce explanations better aligned with users' and situational needs.
Full Citation
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness. Directions for Explainable Knowledge-Enabled Systems. In: Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI: Foundations, Applications and Challenges. Studies on the Semantic Web, IOS Press, Amsterdam, 2020.
Publisher
IOS Press