{ "id": "1703.06857", "version": "v1", "published": "2017-03-20T17:21:19.000Z", "updated": "2017-03-20T17:21:19.000Z", "title": "Deep Neural Networks Do Not Recognize Negative Images", "authors": [ "Hossein Hosseini", "Radha Poovendran" ], "categories": [ "cs.CV", "cs.LG", "stat.ML" ], "abstract": "Deep Neural Networks (DNNs) have achieved remarkable performance on a variety of pattern-recognition tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance. In this paper, we test state-of-the-art DNNs on negative images and show that their accuracy drops to the level of random classification. This leads us to the conjecture that DNNs, which are merely trained on raw data, do not recognize the semantics of the objects but rather memorize the inputs. We suggest that negative images can be thought of as \"semantic adversarial examples\", which we define as transformed inputs that semantically represent the same objects but that the model does not classify correctly.", "revisions": [ { "version": "v1", "updated": "2017-03-20T17:21:19.000Z" } ], "analyses": { "keywords": [ "deep neural networks", "recognize negative images", "particularly visual classification problems", "semantic adversarial examples", "pattern-recognition tasks" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }