Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML
Author: Narkar, S.; Zhang, Y.; Liao, Q. V.; Wang, D.; Weisz, J. D.
Full Citation: Narkar, S., Zhang, Y., Liao, Q. V., Wang, D., & Weisz, J. D. (2021, April). Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML. In 26th International Conference on Intelligent User Interfaces (pp. 170-174). DOI: 10.1145/3397481.3450658.
URI: http://doi.org/10.1145/3397481.3450658; https://dl.acm.org/doi/10.1145/3397481.3450658; https://hdl.handle.net/20.500.13015/6502
Abstract: Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models. A critical, final step of AutoML is human selection of a final model from dozens of candidates. In current AutoML systems, selection is supported only by performance metrics. Prior work has shown that in practice, people evaluate ML models based on additional criteria, such as the way a model makes predictions. Comparison may happen at multiple levels, from types of errors, to feature importance, to how the model makes predictions of specific instances. We developed Model LineUpper to support interactive model comparison for AutoML by integrating multiple Explainable AI (XAI) and visualization techniques. We conducted a user study in which we both evaluated the system and used it as a technology probe to understand how users perform model comparison in an AutoML system. We discuss design implications for utilizing XAI techniques for model comparison and supporting the unique needs of data scientists in comparing AutoML models.