arXiv:1907.04135 [cs.LG]

The What-If Tool: Interactive Probing of Machine Learning Models

James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, Jimbo Wilson

Published: 2019-07-09 (Version 1)

A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool and report on real-life usage at different organizations.
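As a concrete illustration, the sketch below loads the What-If Tool in a Jupyter notebook via the open-source `witwidget` package. The toy data, feature names (`f0`, `f1`), and scikit-learn model are illustrative stand-ins, not from the paper; the `WitConfigBuilder` / `WitWidget` / `set_custom_predict_fn` calls are the tool's published notebook API.

```python
# pip install witwidget  (run inside a Jupyter notebook)
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Toy tabular data and model -- illustrative stand-ins only.
X = np.random.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model = LogisticRegression().fit(X, y)

def make_example(row):
    # Encode one row as a tf.Example, the input format WIT expects.
    ex = tf.train.Example()
    ex.features.feature["f0"].float_list.value.append(float(row[0]))
    ex.features.feature["f1"].float_list.value.append(float(row[1]))
    return ex

examples = [make_example(r) for r in X]

def predict_fn(examples_to_infer):
    # Decode tf.Examples back into a feature matrix and return
    # per-class probabilities, as WIT's custom predict hook expects.
    feats = np.array(
        [[ex.features.feature["f0"].float_list.value[0],
          ex.features.feature["f1"].float_list.value[0]]
         for ex in examples_to_infer])
    return model.predict_proba(feats).tolist()

config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)  # renders the interactive tool inline
```

From the rendered widget, a practitioner can then edit individual examples to ask "what if" questions, inspect partial-dependence-style feature plots, and slice the dataset to compare fairness metrics across subgroups, as described in the abstract.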

Comments: 10 pages, 6 figures, 2 tables. To be presented at IEEE VAST 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:1908.02781 [cs.LG] (Published 2019-08-07)
Flood Prediction Using Machine Learning Models: Literature Review
arXiv:2001.11757 [cs.LG] (Published 2020-01-31)
Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models
arXiv:1911.03030 [cs.LG] (Published 2019-11-08)
Certified Data Removal from Machine Learning Models