arXiv Analytics

arXiv:2212.02774 [cs.CV]

Adaptive Testing of Computer Vision Models

Irena Gao, Gabriel Ilharco, Scott Lundberg, Marco Tulio Ribeiro

Published 2022-12-06 (Version 1)

Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge. We introduce AdaVision, an interactive process for testing vision models which helps users identify and fix coherent failure modes. Given a natural language description of a coherent group, AdaVision retrieves relevant images from LAION-5B with CLIP. The user then labels a small amount of data for model correctness, which is used in successive retrieval rounds to hill-climb towards high-error regions, refining the group definition. Once a group is saturated, AdaVision uses GPT-3 to suggest new group descriptions for the user to explore. We demonstrate the usefulness and generality of AdaVision in user studies, where users find major bugs in state-of-the-art classification, object detection, and image captioning models. These user-discovered groups have failure rates 2-3x higher than those surfaced by automatic error clustering methods. Finally, finetuning on examples found with AdaVision fixes the discovered bugs when evaluated on unseen examples, without degrading in-distribution accuracy, and while also improving performance on out-of-distribution datasets.
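The retrieval loop the abstract describes can be sketched in miniature. This is a hypothetical illustration, not the paper's actual code: in AdaVision the query and images would be embedded with CLIP and retrieval would run against a LAION-5B index, whereas here we use plain unit vectors, and the function names (`retrieve`, `hill_climb`) and the `alpha` step size are invented for the example. The key idea shown is cosine-similarity retrieval plus nudging the query embedding toward user-labeled failures so that later rounds surface more errors.

```python
# Minimal sketch of an AdaVision-style retrieve-and-hill-climb loop.
# All names and parameters are illustrative; real usage would embed the
# natural-language group description and candidate images with CLIP.
import numpy as np

def normalize(v):
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve(query, image_embs, k=5):
    """Return indices of the k images most similar to the query embedding."""
    sims = image_embs @ query  # unit vectors -> dot product = cosine similarity
    return np.argsort(-sims)[:k]

def hill_climb(query, failure_embs, alpha=0.5):
    """Nudge the query toward the mean of embeddings the user labeled as failures."""
    direction = normalize(failure_embs.mean(axis=0))
    return normalize((1 - alpha) * query + alpha * direction)

# Toy 2-D example: the initial query points at one region, but the
# model's failures (as labeled by the user) cluster elsewhere.
query = normalize(np.array([1.0, 0.0]))
images = normalize(np.array([[1.0, 0.0],    # easy image
                             [0.0, 1.0],    # failure
                             [0.05, 1.0],   # failure
                             [1.0, -0.05]]))  # easy image
failures = images[1:3]
refined = hill_climb(query, failures, alpha=0.8)
```

After one hill-climbing step, `retrieve(refined, images, k=2)` ranks the failure-cluster images first, mimicking how successive labeling rounds steer retrieval toward high-error regions.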

Related articles:
arXiv:2301.13514 [cs.CV] (Published 2023-01-31)
Fourier Sensitivity and Regularization of Computer Vision Models
arXiv:2005.10430 [cs.CV] (Published 2020-05-21)
Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation
arXiv:2211.13644 [cs.CV] (Published 2022-11-24)
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models