arXiv:1412.7122 [cs.CV]
Exploring Invariances in Deep Convolutional Neural Networks Using Synthetic Images
Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
Published 2014-12-22 (version 1)
Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters. What exactly do these parameters represent? Recent work has started to analyse CNN representations, finding, for example, that they are invariant to some 2D transformations but are confused by particular types of image noise. In this paper, we delve deeper and ask: how invariant are CNNs to object-class variations caused by 3D shape, pose, and photorealism? These invariance properties are difficult to analyse using traditional data, so we propose an approach that renders synthetic data from freely available 3D CAD models. Using our approach, we can easily generate a virtually unlimited number of training images for almost any object. We explore the invariance of CNNs to various intra-class variations by simulating different rendering conditions, with surprising findings. Based on these results, we propose an optimal synthetic data generation strategy for training object detectors from CAD models. We show that our Virtual CNN approach significantly outperforms previous methods for learning object detectors from synthetic data on the benchmark PASCAL VOC2007 dataset.
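The abstract describes generating training images by rendering CAD models under varied conditions. The following is a minimal, hypothetical sketch of what such a rendering-condition sampler might look like; the parameter names, ranges, and background options are illustrative assumptions, not the paper's actual settings, and the actual rendering step is omitted.

```python
import random

# Hypothetical sketch: each synthetic training image would be rendered from a
# CAD model under a sampled combination of camera pose, texture, and
# background. All ranges and choices below are assumptions for illustration.

def sample_render_config(rng):
    """Sample one set of rendering conditions for a synthetic image."""
    return {
        "azimuth_deg": rng.uniform(0.0, 360.0),     # camera rotation around the object
        "elevation_deg": rng.uniform(-15.0, 45.0),  # camera height angle
        "distance": rng.uniform(1.5, 3.0),          # camera distance, model units
        "textured": rng.random() < 0.5,             # original texture vs. uniform gray
        "background": rng.choice(["white", "real_image", "gradient"]),
    }

def generate_configs(n_images, seed=0):
    """Generate a reproducible list of rendering configurations."""
    rng = random.Random(seed)
    return [sample_render_config(rng) for _ in range(n_images)]

configs = generate_configs(100, seed=42)
```

Each sampled configuration would then be passed to a renderer to produce one training image, so the dataset systematically covers the pose, texture, and background variations whose effect on CNN invariance the paper studies.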