{ "id": "1912.02783", "version": "v1", "published": "2019-12-05T18:20:31.000Z", "updated": "2019-12-05T18:20:31.000Z", "title": "Self-Supervised Learning of Video-Induced Visual Invariances", "authors": [ "Michael Tschannen", "Josip Djolonga", "Marvin Ritter", "Aravindh Mahendran", "Neil Houlsby", "Sylvain Gelly", "Mario Lucic" ], "categories": [ "cs.CV", "cs.LG" ], "abstract": "We propose a general framework for self-supervised learning of transferable visual representations based on video-induced visual invariances (VIVI). We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the YouTube-8M data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set.", "revisions": [ { "version": "v1", "updated": "2019-12-05T18:20:31.000Z" } ], "analyses": { "keywords": [ "video-induced visual invariances", "self-supervised learning", "visual task adaptation benchmark", "full imagenet data set", "diverse downstream tasks" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }