Trustable Task Processing Systems

Authors
Glass, Alyssa
McGuinness, Deborah L.
Pinheiro, Paulo
Wolverton, Michael
Issue Date
2008-08-01
Keywords
Theory and Practice of Accountable Systems
Abstract
As personal assistant software matures and assumes more autonomous control of user activities, it becomes more critical that this software can tell the user why it is doing what it is doing, and instill trust in the user that its task knowledge reflects standard practice and is being appropriately applied. Our research focuses broadly on providing infrastructure that may be used to increase trust in intelligent agents. In this paper, we report on a study we designed to identify factors that influence trust in intelligent adaptive agents. We then introduce our work on explaining adaptive task processing agents, as motivated by the results of the trust study. We describe our task execution explanation component and provide examples in the context of a particular adaptive agent named CALO. Key features include (1) an architecture designed for re-use among different task execution systems; (2) a set of introspective predicates and a software wrapper that extracts explanation-relevant information from a task execution system; (3) a version of the Inference Web explainer for generating formal justifications of task processing and converting them to user-friendly explanations; and (4) a unified framework for explaining results from task execution, learning, and deductive reasoning.
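The wrapper-and-predicates idea in feature (2) can be sketched as follows. This is a minimal illustrative example, not CALO's or Inference Web's actual API: all names (`TaskExecutor`, `ExplanationWrapper`, `task_status`, `why`) are hypothetical, assuming only that the executor records, per task, a status and the reason it was undertaken.

```python
# Hypothetical sketch of feature (2): a wrapper that exposes
# explanation-relevant state of a task executor as introspective
# predicates. All class and method names are illustrative.

class TaskExecutor:
    """Stand-in for an arbitrary task execution system."""

    def __init__(self):
        # task id -> (status, reason the task was started)
        self._tasks = {}

    def start(self, task_id, reason):
        self._tasks[task_id] = ("running", reason)

    def finish(self, task_id):
        _, reason = self._tasks[task_id]
        self._tasks[task_id] = ("done", reason)


class ExplanationWrapper:
    """Introspective predicates over the executor's internal state.

    Different executors could be wrapped the same way, supporting
    re-use across task execution systems.
    """

    def __init__(self, executor):
        self._executor = executor

    def task_status(self, task_id):
        # Introspective predicate: what state is this task in?
        return self._executor._tasks[task_id][0]

    def why(self, task_id):
        # Raw justification, which an explainer could convert
        # into a user-friendly explanation.
        status, reason = self._executor._tasks[task_id]
        return f"task {task_id} is {status} because {reason}"
```

In this sketch the wrapper never changes the executor's behavior; it only reads state, which is what lets one explanation component sit in front of several different task execution systems.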
Publisher
KI Journal, Special Issue on Explanation
Relationships
https://tw.rpi.edu/project/TPAS