arXiv:1902.09513 [cs.CV]

FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation

Paul Voigtlaender, Yuning Chai, Florian Schroff, Hartwig Adam, Bastian Leibe, Liang-Chieh Chen

Published 2019-02-25, Version 1

Many of the recent successful methods for video object segmentation (VOS) are overly complicated, rely heavily on fine-tuning on the first frame, and/or are slow, and are hence of limited practical use. In this work, we propose FEELVOS, a simple and fast method which does not rely on fine-tuning. To segment a video, FEELVOS uses, for each frame, a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as an internal guidance of a convolutional network. Our novel dynamic segmentation head allows us to train the network, including the embedding, end-to-end for the multiple-object segmentation task with a cross-entropy loss. We achieve a new state of the art in video object segmentation without fine-tuning on the DAVIS 2017 validation set, with a J&F measure of 69.1%.
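The matching mechanism lends itself to a short sketch. The code below is a minimal illustration, not the paper's implementation: it computes the two nearest-neighbor distance maps the abstract describes, comparing current-frame pixel embeddings against the first frame's object pixels (global matching) and against a small spatial window in the previous frame (local matching). Function names, tensor shapes, and the window size k are assumptions made for illustration.

```python
# Minimal sketch (assumed interfaces, not the authors' code) of global and
# local embedding matching for one target object.
import torch


def global_matching(cur_emb, ref_emb, ref_mask):
    """Distance from each current-frame pixel to its nearest first-frame
    object pixel. cur_emb/ref_emb: (H, W, C); ref_mask: (H, W) bool."""
    H, W, C = cur_emb.shape
    obj = ref_emb[ref_mask]                       # (N, C) object pixels in frame 1
    d = torch.cdist(cur_emb.reshape(-1, C), obj)  # (H*W, N) pairwise distances
    return d.min(dim=1).values.reshape(H, W)      # nearest-neighbor distance map


def local_matching(cur_emb, prev_emb, prev_mask, k=4):
    """Same idea, but each pixel only searches a (2k+1)x(2k+1) window around
    its own location in the previous frame. k is an illustrative choice."""
    H, W, _ = cur_emb.shape
    far = torch.full((H, W), float("inf"))
    dist = torch.full((H, W), float("inf"))
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            # Note: torch.roll wraps at image borders; a real implementation
            # would pad instead of wrapping.
            shifted_emb = torch.roll(prev_emb, shifts=(dy, dx), dims=(0, 1))
            shifted_mask = torch.roll(prev_mask, shifts=(dy, dx), dims=(0, 1))
            d = (cur_emb - shifted_emb).norm(dim=-1)      # (H, W) distances
            d = torch.where(shifted_mask, d, far)         # ignore non-object pixels
            dist = torch.minimum(dist, d)
    return dist  # (H, W) local nearest-neighbor distance map
```

Per the abstract, these distance maps act only as internal guidance: rather than thresholding them directly, the network feeds them (together with other features and the previous frame's predictions) into the dynamic segmentation head, so the whole model, embedding included, is trained end-to-end with a cross-entropy loss.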

Related articles:
arXiv:1910.00032 [cs.CV] (Published 2019-09-30)
LIP: Learning Instance Propagation for Video Object Segmentation
arXiv:2003.00908 [cs.CV] (Published 2020-02-27)
Learning Fast and Robust Target Models for Video Object Segmentation
arXiv:2311.04414 [cs.CV] (Published 2023-11-08)
Learning the What and How of Annotation in Video Object Segmentation