arXiv Analytics

arXiv:2310.06085 [cs.CV]

Quantile-based Maximum Likelihood Training for Outlier Detection

Masoud Taghikhah, Nishant Kumar, Siniša Šegvić, Abouzar Eslami, Stefan Gumhold

Published 2023-08-20 (Version 1)

Discriminative learning effectively predicts the true object class in image classification. However, it often produces false positives for outliers, a critical concern in applications such as autonomous driving and video surveillance. Previous attempts to address this challenge involved training image classifiers through contrastive learning on actual outlier data or synthesizing outliers for self-supervised learning. Furthermore, unsupervised generative modeling of inliers in pixel space has shown limited success for outlier detection. In this work, we introduce a quantile-based maximum likelihood objective for learning the inlier distribution to improve outlier separation during inference. Our approach fits a normalizing flow to pre-trained discriminative features and detects outliers according to the evaluated log-likelihood. The experimental evaluation demonstrates the effectiveness of our method, as it surpasses the performance of state-of-the-art unsupervised methods for outlier detection. The results are also competitive with a recent self-supervised approach for outlier detection. Our work reduces the dependency on well-sampled negative training data, which is especially important for domains such as medical diagnostics and remote sensing.
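
To make the described pipeline concrete, below is a minimal sketch (not the authors' released code) of the approach: a small coupling-layer normalizing flow is fit to pre-trained discriminative feature vectors and outliers are flagged at test time by low log-likelihood. The coupling architecture, the reading of "quantile-based maximum likelihood" as optimizing a low quantile of the per-batch log-likelihoods instead of their mean, the quantile value q=0.1, and the thresholding rule are illustrative assumptions, not details taken from the abstract.

import torch
import torch.nn as nn
from torch.distributions import Normal

class AffineCoupling(nn.Module):
    """One affine coupling layer: rescales half the dimensions conditioned on the other half."""
    def __init__(self, dim, hidden=128, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(dim - half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x):
        if self.flip:                          # permute halves between layers (volume-preserving)
            x = torch.flip(x, dims=[1])
        half = x.shape[1] // 2
        x1, x2 = x[:, : x.shape[1] - half], x[:, x.shape[1] - half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # bound scales for numerical stability
        z = torch.cat([x1, x2 * torch.exp(s) + t], dim=1)
        log_det = s.sum(dim=1)                 # per-sample log|det Jacobian|
        return z, log_det

class FeatureFlow(nn.Module):
    """Stack of coupling layers with a standard-normal base distribution."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)]
        )
        self.base = Normal(0.0, 1.0)

    def log_prob(self, x):
        log_det = torch.zeros(x.shape[0], device=x.device)
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        return self.base.log_prob(x).sum(dim=1) + log_det

def quantile_nll_loss(log_probs, q=0.1):
    # Assumed interpretation of the quantile-based maximum likelihood objective:
    # maximize a low quantile of the per-sample log-likelihoods rather than the mean.
    return -torch.quantile(log_probs, q)

def fit_flow(inlier_features, epochs=50, batch_size=256, lr=1e-3):
    # `inlier_features` are pre-trained discriminative features, e.g. penultimate-layer
    # activations of a frozen classifier, with shape [N, D].
    flow = FeatureFlow(inlier_features.shape[1])
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(inlier_features),
        batch_size=batch_size, shuffle=True,
    )
    for _ in range(epochs):
        for (batch,) in loader:
            loss = quantile_nll_loss(flow.log_prob(batch))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return flow

# Inference sketch: samples whose log-likelihood falls below a threshold calibrated on
# held-out inlier data (e.g. its 5th percentile) would be reported as outliers.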

Related articles:
arXiv:2105.09270 [cs.CV] (Published 2021-05-19)
Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
arXiv:2402.15374 [cs.CV] (Published 2024-02-23, updated 2024-06-10)
Outlier detection by ensembling uncertainty with negative objectness
arXiv:1411.6850 [cs.CV] (Published 2014-11-25)
Similarity-based approach for outlier detection