arXiv:2105.09270 [cs.CV]

Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?

Zhisheng Xiao, Qing Yan, Yali Amit

Published 2021-05-19, Version 1

Unsupervised outlier detection, which predicts whether a test sample is an outlier using only information from unlabelled inlier data, is an important but challenging task. Recently, methods based on a two-stage framework have achieved state-of-the-art performance on this task. The framework leverages self-supervised representation learning algorithms to train a feature extractor on inlier data, and then applies a simple outlier detector in the feature space. In this paper, we explore the possibility of avoiding the high cost of training a distinct representation for each outlier detection task, and instead using a single pre-trained network as a universal feature extractor regardless of the source of in-domain data. In particular, we replace the task-specific feature extractor with a single network pre-trained on ImageNet with a self-supervised loss. In experiments, we demonstrate competitive or better performance on a variety of outlier detection benchmarks compared with previous two-stage methods, suggesting that learning representations from in-domain data may be unnecessary for outlier detection.
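
To make the two-stage idea concrete, below is a minimal sketch of the pipeline the abstract describes: a fixed pre-trained network maps images to features, and a simple detector fit only on inlier features scores test samples. The specific choices here are assumptions for illustration, not the paper's exact setup: a torchvision ResNet-50 with ImageNet weights stands in for the self-supervised backbone, and a k-nearest-neighbor distance serves as the outlier score.

```python
# Sketch of a two-stage outlier detector with a fixed pre-trained backbone.
# Assumptions (not from the paper): torchvision ResNet-50 ImageNet weights as a
# stand-in for the self-supervised network; kNN distance as the outlier score.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.neighbors import NearestNeighbors

# Stage 1: universal feature extractor, no training on in-domain data.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()  # drop the classification head, keep features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """Map a list of PIL images to L2-normalized feature vectors."""
    x = torch.stack([preprocess(img) for img in images])
    feats = backbone(x)
    return torch.nn.functional.normalize(feats, dim=1).numpy()

# Stage 2: simple detector fit only on unlabelled inlier data.
def fit_detector(inlier_images, k=5):
    feats = extract_features(inlier_images)
    return NearestNeighbors(n_neighbors=k).fit(feats)

def outlier_score(detector, test_images):
    """Higher score = farther from the inlier features = more likely an outlier."""
    feats = extract_features(test_images)
    dists, _ = detector.kneighbors(feats)
    return dists.mean(axis=1)
```

Because the backbone is never fine-tuned, the only per-task cost is a single feature-extraction pass over the inlier set plus fitting the lightweight detector, which is the cost saving the paper argues for.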

Related articles:
arXiv:2402.15374 [cs.CV] (Published 2024-02-23, updated 2024-06-10)
Outlier detection by ensembling uncertainty with negative objectness
arXiv:2104.13614 [cs.CV] (Published 2021-04-28)
Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors
arXiv:2310.06085 [cs.CV] (Published 2023-08-20)
Quantile-based Maximum Likelihood Training for Outlier Detection