Foundations of Explainable Knowledge-Enabled Systems

Authors
Chari, Shruthi
Seneviratne, Oshani
Gruen, Daniel M.
McGuinness, Deborah L.
Issue Date
2020-04
Keywords
KG4XAI, Explainable Knowledge-Enabled Systems, Historical Evolution
Terms of Use
Attribution-NoDerivs 3.0 United States
Full Citation
Chari, S., Seneviratne, O., Gruen, D.M., McGuinness, D.L., 2020. Foundations of Explainable Knowledge-Enabled Systems. In Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI -- Foundations.
Abstract
Explainability has been an important goal since the early days of Artificial Intelligence, and several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems of their time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert system, cognitive assistant, semantic application, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying the gaps that must be closed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
Publisher
IOS Press