arXiv Analytics


arXiv:1703.06857 [cs.CV]

Deep Neural Networks Do Not Recognize Negative Images

Hossein Hosseini, Radha Poovendran

Published 2017-03-20, Version 1

Deep Neural Networks (DNNs) have achieved remarkable performance on a variety of pattern-recognition tasks, particularly visual classification problems, where new algorithms are reported to match or even surpass human performance. In this paper, we test state-of-the-art DNNs on negative images and show that their accuracy drops to the level of random classification. This leads us to the conjecture that DNNs, which are merely trained on raw data, do not recognize the semantics of the objects, but rather memorize the inputs. We suggest that negative images can be thought of as "semantic adversarial examples", which we define as transformed inputs that semantically represent the same objects but that the model does not classify correctly.
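The negative-image transformation the abstract refers to is simply pixel-value inversion: in an 8-bit image, every intensity v becomes 255 - v, so the content and shapes are preserved while brightness and colors are reversed. The paper itself does not include code; the following is a minimal sketch of the transformation using NumPy, assuming 8-bit (uint8) image arrays:

```python
import numpy as np

def negate(image: np.ndarray) -> np.ndarray:
    """Return the negative of an 8-bit image: each pixel v becomes 255 - v."""
    assert image.dtype == np.uint8, "sketch assumes 8-bit images"
    return 255 - image

# Toy grayscale example: a bright square on a dark background
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 200

neg = negate(img)
print(neg[0, 0])  # background: 255 - 0 = 255
print(neg[1, 1])  # square:     255 - 200 = 55
```

To reproduce the paper's experiment, one would apply this transformation to an entire test set (e.g. MNIST or CIFAR-10) and measure a trained classifier's accuracy on the negated images; the authors report it falls to chance level.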

Related articles:
arXiv:1605.02699 [cs.CV] (Published 2016-05-09)
A Theoretical Analysis of Deep Neural Networks for Texture Classification
arXiv:1502.02445 [cs.CV] (Published 2015-02-09)
Deep Neural Networks for Anatomical Brain Segmentation
arXiv:1312.2249 [cs.CV] (Published 2013-12-08)
Scalable Object Detection using Deep Neural Networks