Trust in AutoML: exploring information needs for establishing trust in automated machine learning systems

Authors
Drozdal, Jaimie
Weisz, Justin
Wang, Dakuo
Dass, Gaurav
Yao, Bingsheng
Zhao, Changruo
Muller, Michael
Ju, Lin
Su, Hui
Issue Date
2020-03-17
Type
Article
Abstract
We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies - qualitative interviews, a controlled experiment, and a card-sorting task - to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased users' trust in, and understanding of, the tool, and that of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing trust in an AutoML tool.
Full Citation
Jaimie Drozdal, Justin Weisz, Dakuo Wang, Gaurav Dass, Bingsheng Yao, Changruo Zhao, Michael Muller, Lin Ju, and Hui Su. 2020. Trust in AutoML: exploring information needs for establishing trust in automated machine learning systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). Association for Computing Machinery, New York, NY, USA, 297–307. https://doi.org/10.1145/3377325.3377501
Publisher
ACM