arXiv:2006.16241 [cs.CV]

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization

Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, Justin Gilmer

Published 2020-06-29 (Version 1)

We introduce three new robustness benchmarks consisting of naturally occurring distribution changes in image style, geographic location, camera operation, and more. Using our benchmarks, we take stock of previously proposed hypotheses for out-of-distribution robustness and put them to the test. We find that using larger models and synthetic data augmentation can improve robustness on real-world distribution shifts, contrary to claims in prior work. Motivated by this, we introduce a new data augmentation method that advances the state of the art and outperforms models pretrained with 1000x more labeled data. We find that some methods consistently help with distribution shifts in texture and local image statistics, but these methods do not help with some other distribution shifts like geographic changes. We conclude that future research must study multiple distribution shifts simultaneously.
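For readers who want a concrete starting point, here is a minimal sketch of the kind of synthetic data augmentation the abstract refers to, using torchvision's AugMix transform (AugMix is one of the augmentation methods the paper evaluates; the paper's own new method, DeepAugment, distorts images with image-to-image networks and is not reproduced here). This assumes torchvision >= 0.13, and the input file name is hypothetical.

```python
# A minimal sketch of training-time synthetic data augmentation with AugMix,
# not the paper's DeepAugment method. Assumes torchvision >= 0.13.
from PIL import Image
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    # AugMix samples random chains of simple ops (shear, posterize, ...)
    # and mixes their outputs back into the original image.
    transforms.AugMix(severity=3, mixture_width=3),
    transforms.ToTensor(),
])

img = Image.open("example.jpg").convert("RGB")  # hypothetical input file
augmented = train_transform(img)                # tensor ready for training
```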

Comments: Datasets, code, and models available at https://github.com/hendrycks/imagenet-r
Categories: cs.CV, cs.LG, stat.ML
Related articles:
arXiv:2208.10722 [cs.CV] (Published 2022-08-23)
Bag of Tricks for Out-of-Distribution Generalization
arXiv:1907.12739 [cs.CV] (Published 2019-07-12)
Deep Learning For Face Recognition: A Critical Analysis
arXiv:2208.03462 [cs.CV] (Published 2022-08-06)
Class Is Invariant to Context and Vice Versa: On Learning Invariance for Out-Of-Distribution Generalization