arXiv Analytics

arXiv:2303.14644 [cs.CV]

Affordance Grounding from Demonstration Video to Target Image

Joya Chen, Difei Gao, Kevin Qinghong Lin, Mike Zheng Shou

Published 2023-03-26 (Version 1)

Humans excel at learning from expert demonstrations and solving their own problems. To equip intelligent robots and assistants, such as AR glasses, with this ability, it is essential to ground human hand interactions (i.e., affordances) from demonstration videos and apply them to a target image, such as a user's AR glasses view. This video-to-image affordance grounding task is challenging because (1) it requires predicting fine-grained affordances, and (2) the limited training data inadequately covers video-image discrepancies, which degrades grounding. To tackle these challenges, we propose Affordance Transformer (Afformer), which uses a fine-grained transformer-based decoder to gradually refine affordance grounding. Moreover, we introduce Mask Affordance Hand (MaskAHand), a self-supervised pre-training technique that synthesizes video-image data and simulates context changes, enhancing affordance grounding across video-image discrepancies. Afformer with MaskAHand pre-training achieves state-of-the-art performance on multiple benchmarks, including a substantial 37% improvement on the OPRA dataset. Code is available at https://github.com/showlab/afformer.
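
The abstract describes a coarse-to-fine, cross-attention decoder that grounds video affordances onto a target image. The PyTorch sketch below illustrates only that general idea: image queries attend to video features, and the resulting heatmap is refined at a higher resolution. All module names, feature shapes, and the two-stage refinement schedule are illustrative assumptions, not the authors' implementation; see https://github.com/showlab/afformer for the official code.

# Minimal sketch of video-to-image affordance grounding with a coarse-to-fine
# cross-attention decoder. Dimensions and structure are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseToFineAffordanceDecoder(nn.Module):
    """Cross-attend image queries to video features, then refine at higher resolution."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.coarse = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.fine = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.head = nn.Conv2d(dim, 1, kernel_size=1)  # per-pixel affordance logit

    def forward(self, img_feat, vid_feat):
        # img_feat: (B, C, H, W) target-image features; vid_feat: (B, T*h*w, C) video tokens
        B, C, H, W = img_feat.shape

        # Coarse stage: decode at reduced resolution to keep attention cheap.
        coarse_feat = F.adaptive_avg_pool2d(img_feat, (H // 2, W // 2))
        q = coarse_feat.flatten(2).transpose(1, 2)            # (B, H/2*W/2, C)
        q = self.coarse(q, vid_feat)                          # image queries attend to video context
        coarse_map = q.transpose(1, 2).reshape(B, C, H // 2, W // 2)

        # Fine stage: upsample and refine with a second round of cross-attention.
        up = F.interpolate(coarse_map, size=(H, W), mode="bilinear", align_corners=False)
        q = (up + img_feat).flatten(2).transpose(1, 2)        # (B, H*W, C)
        q = self.fine(q, vid_feat)
        fine_map = q.transpose(1, 2).reshape(B, C, H, W)

        return self.head(fine_map)                            # (B, 1, H, W) affordance heatmap logits


# Toy usage with random tensors standing in for video/image backbone outputs.
decoder = CoarseToFineAffordanceDecoder()
img_feat = torch.randn(2, 256, 16, 16)        # target-image feature map
vid_feat = torch.randn(2, 8 * 7 * 7, 256)     # 8 video frames of 7x7 tokens
heatmap = torch.sigmoid(decoder(img_feat, vid_feat))
print(heatmap.shape)  # torch.Size([2, 1, 16, 16])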

Related articles
arXiv:1812.00893 [cs.CV] (Published 2018-12-03)
Domain Alignment with Triplets
arXiv:2108.05675 [cs.CV] (Published 2021-08-12)
Learning Visual Affordance Grounding from Demonstration Videos
arXiv:2406.18586 [cs.CV] (Published 2024-06-06)
Cut-and-Paste with Precision: a Content and Perspective-aware Data Augmentation for Road Damage Detection