Directions for Explainable Knowledge-Enabled Systems

Authors
Chari, Shruthi
Gruen, Daniel M.
Seneviratne, Oshani
McGuinness, Deborah L.
Issue Date
2020
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
Full Citation
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness. Directions for Explainable Knowledge-Enabled Systems. In: Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI -- Foundations, Applications and Challenges. Studies on the Semantic Web, IOS Press, Amsterdam, 2020.
Abstract
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.
Publisher
IOS Press