An ontology-enabled approach for user-centered and knowledge-enabled explanations of AI systems

Authors
Chari, Shruthi
Issue Date
2024-08
Type
Electronic thesis
Thesis
Language
en_US
Keywords
Computer science
Abstract
Explainability has been a well-studied problem in the Artificial Intelligence (AI) community across several eras of AI, from expert systems to the current deep learning era, in the interest of enabling AI's safe and robust use. Through these eras, the distinctive nature of AI approaches and their applications has required explainability approaches to evolve as well. However, these successive iterations of explaining AI decisions have all focused on helping humans better understand and analyze the results and workings of AI systems. In this thesis, we seek to advance the user-centered explainability sub-field of AI by addressing several challenges around explanations in the current AI era, which is characterized by the availability of many machine learning (ML) explainers and neuro-symbolic AI approaches. Previous research in user-centered explainability has mainly focused on what needs to be explained and less on how to implement such explanations. Additionally, supporting explanations in a form that humans can easily interpret remains difficult due to the lack of a unified framework for different explanation types and of methods to incorporate domain knowledge from authoritative literature. We address three challenges, framed as research questions, around user-centered explainability: How can we formally represent explanations with support for interacting AI systems (AI methods in an applied ecosystem), additional data sources, and different explanation dimensions? How useful and feasible are such explanations in clinical settings? Is it feasible to combine explanations from multiple data modalities and AI methods? For the first research question, we design an Explanation Ontology (EO), a general-purpose semantic representation that can represent fifteen literature-derived explanation types via their system-, interface-, and user-related components. We demonstrate the utility of the EO in representing explanations across different use cases, supporting system designers in answering explanation-related questions via a set of competency questions, and categorizing explanations into the explanation types supported within our ontology. For the second research question, we focus on a key explanation dimension, contextual explanations, and conduct a case study on supporting contextual explanations drawn from an authoritative knowledge source, clinical practice guidelines (CPGs). Here, we design a clinical question-answering (QA) system that answers CPG questions to provide contextual explanations, helping clinicians interpret risk prediction scores and their post hoc explanations in a comorbidity risk prediction setting. For the QA system, we leverage large language models (LLMs) and their clinical variants and implement knowledge augmentations to these models to improve the semantic coherence of the answers. We evaluate both the feasibility and the value of supporting these contextual explanations. For feasibility, we use quantitative metrics to report the performance of the QA system across different model settings and data splits. To evaluate the value of the explanations, we report findings from presenting the results of our QA approach to an expert panel of clinicians. Finally, for the last research question, we design a general-purpose, open-source framework, the Metaexplainer, capable of producing natural-language explanations in response to a user question by drawing on the explainer methods registered to generate explanations of a particular type.
The Metaexplainer is a three-stage (Decompose, Delegate, and Synthesis) modular framework in which each stage produces intermediate outputs that the next stage ingests. In the Decompose stage, we take user questions as input, identify which explanation type can best address them, and generate actionable machine interpretations; in the Delegate stage, we run the explainer methods registered for the identified explanation type, passing along any filters extracted from the question; and finally, in the Synthesis stage, we generate natural-language explanations following the explanation type's template. For the Metaexplainer, we leverage LLMs, the EO, and explainer methods to generate user-centered explanations in response to user questions. We evaluate the Metaexplainer on open-source tabular datasets, but the framework can be applied to other modalities with code adaptations. Overall, through this thesis, we aim to design methods that support knowledge-enabled explanations across different use cases, accounting for the methods in today's AI era that can generate the supporting components of these explanations and for the domain knowledge sources that can enhance them. We demonstrate the efficacy of our approach in two clinical use cases as case studies, but design our methods to apply to use cases outside of healthcare as well. By implementing approaches for knowledge-enabled explainability that leverage the strengths of symbolic and neural AI, we take a step toward user-centered explainability that helps humans interpret and understand AI decisions from different perspectives.
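To make the staged flow described above concrete, the following minimal Python sketch wires together a Decompose-Delegate-Synthesis pipeline over a registry of explainer methods. All names here (DecomposedQuestion, register_explainer, metaexplain, and so on) are hypothetical placeholders for illustration, not the Metaexplainer's actual API; in the thesis, LLMs and the Explanation Ontology drive the Decompose and Synthesis steps rather than the stubs shown.

```python
# Illustrative sketch only: hypothetical names, not the Metaexplainer API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DecomposedQuestion:
    """Machine interpretation of a user question (Decompose stage output)."""
    explanation_type: str                                    # e.g., "contrastive", "contextual"
    filters: Dict[str, str] = field(default_factory=dict)    # constraints parsed from the question


# Registry mapping explanation types to explainer methods (consumed by the Delegate stage).
EXPLAINER_REGISTRY: Dict[str, List[Callable[[Dict[str, str]], dict]]] = {}


def register_explainer(explanation_type: str, explainer: Callable[[Dict[str, str]], dict]) -> None:
    """Register an explainer method as capable of producing a given explanation type."""
    EXPLAINER_REGISTRY.setdefault(explanation_type, []).append(explainer)


def decompose(question: str) -> DecomposedQuestion:
    # Stub: the thesis uses LLMs guided by the Explanation Ontology to map the
    # question to an explanation type and extract filters.
    return DecomposedQuestion(explanation_type="contrastive", filters={"feature": "age"})


def delegate(parsed: DecomposedQuestion) -> List[dict]:
    # Run every explainer registered for the identified explanation type,
    # passing along any filters extracted from the question.
    explainers = EXPLAINER_REGISTRY.get(parsed.explanation_type, [])
    return [explainer(parsed.filters) for explainer in explainers]


def synthesize(parsed: DecomposedQuestion, outputs: List[dict]) -> str:
    # Stub: render explainer outputs into natural language following the
    # explanation type's template (an LLM-driven step in the thesis).
    return f"{parsed.explanation_type.capitalize()} explanation based on {len(outputs)} explainer output(s)."


def metaexplain(question: str) -> str:
    parsed = decompose(question)        # Decompose: question -> explanation type + filters
    outputs = delegate(parsed)          # Delegate: run registered explainer methods
    return synthesize(parsed, outputs)  # Synthesis: template-driven natural-language answer


if __name__ == "__main__":
    register_explainer("contrastive", lambda filters: {"method": "toy-explainer", "filters": filters})
    print(metaexplain("Why was this patient flagged as high risk rather than low risk?"))
```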
Description
August 2024
School of Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY