arXiv Analytics

arXiv:2201.12813 [cs.CV]

Contrastive Learning from Demonstrations

André Correia, Luís A. Alexandre

Published 2022-01-30, Version 1

This paper presents a framework for learning visual representations from unlabeled video demonstrations captured from multiple viewpoints. We show that these representations are applicable to imitating several robotic tasks, including pick and place. We optimize a recently proposed self-supervised learning algorithm by applying contrastive learning to enhance task-relevant information while suppressing irrelevant information in the feature embeddings. We validate the proposed method on the publicly available Multi-View Pouring data set and a custom Pick and Place data set, and compare it with the TCN triplet baseline. We evaluate the learned representations using three metrics: viewpoint alignment, stage classification and reinforcement learning, and in all cases the results improve over state-of-the-art approaches, with the added benefit of a reduced number of training iterations.
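The abstract does not give implementation details, so the following is only an illustrative sketch of the general setup it describes: an InfoNCE-style contrastive objective over time-aligned frame embeddings from two viewpoints, shown next to a TCN-style triplet loss of the kind used as a baseline. The function names, loss form, and hyperparameters here are assumptions, not the paper's actual code.

```python
# Illustrative sketch only; the paper's exact loss and architecture are not
# specified in the abstract, so this is an assumed InfoNCE-style formulation.
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(emb_view1, emb_view2, temperature=0.1):
    """emb_view1, emb_view2: (N, D) embeddings of the same N time steps seen
    from two viewpoints. Time-aligned pairs are positives; every other frame
    in the batch serves as a negative."""
    z1 = F.normalize(emb_view1, dim=1)
    z2 = F.normalize(emb_view2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric cross-entropy: each frame should match its time-aligned
    # counterpart from the other viewpoint and no other frame.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def tcn_triplet_loss(anchor, positive, negative, margin=0.2):
    """TCN-style triplet baseline: co-temporal frames (anchor, positive) are
    pulled together, temporally distant frames (negative) pushed apart."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()
```

The main design difference the sketch highlights is that the contrastive objective contrasts each anchor against every other frame in the batch rather than a single sampled negative, which is one plausible reason for the reduced number of training iterations reported above.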

Related articles:
arXiv:2106.09958 [cs.CV] (Published 2021-06-18)
Novelty Detection via Contrastive Learning with Negative Data Augmentation
arXiv:2203.12230 [cs.CV] (Published 2022-03-23)
Negative Selection by Clustering for Contrastive Learning in Human Activity Recognition
arXiv:2204.11018 [cs.CV] (Published 2022-04-23)
Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation