arXiv:1903.10869 [cs.CV]

V2CNet: A Deep Learning Framework to Translate Videos to Commands for Robotic Manipulation

Anh Nguyen, Thanh-Toan Do, Ian Reid, Darwin G. Caldwell, Nikos G. Tsagarakis

Published 2019-03-23 (Version 1)

We propose V2CNet, a new deep learning framework to automatically translate demonstration videos into commands that can be directly used in robotic applications. Our V2CNet has two branches and aims at understanding the demonstration video in a fine-grained manner. The first branch has an encoder-decoder architecture that encodes the visual features and sequentially generates the output words as a command, while the second branch uses a Temporal Convolutional Network (TCN) to learn the fine-grained actions. By jointly training both branches, the network is able to model the sequential information of the command while effectively encoding the fine-grained actions. The experimental results on our new large-scale dataset show that V2CNet outperforms recent state-of-the-art methods by a substantial margin, and its output can be applied in real robotic applications. The source code and trained models will be made available.
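The two-branch design described in the abstract can be summarized concretely. Below is a minimal PyTorch sketch, assuming pre-extracted CNN frame features, an LSTM encoder-decoder for command generation, and a small dilated-convolution TCN for action classification. All names, dimensions, and the unweighted joint loss are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of a two-branch video-to-command model.
# Layer names, sizes, and the joint loss are assumptions for illustration only.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """One dilated 1-D convolution block with a residual connection (TCN-style)."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        return self.relu(self.conv(x)) + x   # same temporal length, residual add

class V2CSketch(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab_size=1000, num_actions=40):
        super().__init__()
        # Branch 1: encoder-decoder that generates the command word by word.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.word_out = nn.Linear(hidden, vocab_size)
        # Branch 2: TCN that classifies the fine-grained action.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=1),
            TemporalConvBlock(hidden, dilation=1),
            TemporalConvBlock(hidden, dilation=2),
        )
        self.action_out = nn.Linear(hidden, num_actions)

    def forward(self, frames, commands):
        # frames: (batch, time, feat_dim) pre-extracted visual features
        # commands: (batch, seq_len) word indices for teacher forcing
        # (target shifting for next-word prediction is omitted for brevity)
        _, (h, c) = self.encoder(frames)
        dec_out, _ = self.decoder(self.embed(commands), (h, c))
        word_logits = self.word_out(dec_out)          # (batch, seq_len, vocab)
        tcn_feat = self.tcn(frames.transpose(1, 2))   # (batch, hidden, time)
        action_logits = self.action_out(tcn_feat.mean(dim=2))
        return word_logits, action_logits

# Joint training: sum a captioning loss and an action-classification loss.
model = V2CSketch()
frames = torch.randn(4, 30, 512)             # 4 clips, 30 frames of features each
commands = torch.randint(0, 1000, (4, 8))    # 8-word target commands
actions = torch.randint(0, 40, (4,))
word_logits, action_logits = model(frames, commands)
loss = (nn.functional.cross_entropy(word_logits.reshape(-1, 1000), commands.reshape(-1))
        + nn.functional.cross_entropy(action_logits, actions))
loss.backward()

In this sketch the two losses are simply summed; a weighting between the captioning and action terms would be a natural tuning knob when training such a joint model.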

Comments: 15 pages. arXiv admin note: substantial text overlap with arXiv:1710.00290
Categories: cs.CV, cs.RO
Related articles:
arXiv:1903.01214 [cs.CV] (Published 2019-03-04)
Understanding the Mechanism of Deep Learning Framework for Lesion Detection in Pathological Images with Breast Cancer
Wei-Wen Hsu et al.
arXiv:2203.12482 [cs.CV] (Published 2022-03-23)
A Deep Learning Framework to Reconstruct Face under Mask
arXiv:1704.05708 [cs.CV] (Published 2017-04-19)
A Deep Learning Framework using Passive WiFi Sensing for Respiration Monitoring