{ "id": "2108.05675", "version": "v1", "published": "2021-08-12T11:45:38.000Z", "updated": "2021-08-12T11:45:38.000Z", "title": "Learning Visual Affordance Grounding from Demonstration Videos", "authors": [ "Hongchen Luo", "Wei Zhai", "Jing Zhang", "Yang Cao", "Dacheng Tao" ], "categories": [ "cs.CV" ], "abstract": "Visual affordance grounding aims to segment all possible interaction regions between people and objects from an image/video, which is beneficial for many applications, such as robot grasping and action recognition. However, existing methods mainly rely on the appearance feature of the objects to segment each region of the image, which face the following two problems: (i) there are multiple possible regions in an object that people interact with; and (ii) there are multiple possible human interactions in the same object region. To address these problems, we propose a Hand-aided Affordance Grounding Network (HAGNet) that leverages the aided clues provided by the position and action of the hand in demonstration videos to eliminate the multiple possibilities and better locate the interaction regions in the object. Specifically, HAG-Net has a dual-branch structure to process the demonstration video and object image. For the video branch, we introduce hand-aided attention to enhance the region around the hand in each video frame and then use the LSTM network to aggregate the action features. For the object branch, we introduce a semantic enhancement module (SEM) to make the network focus on different parts of the object according to the action classes and utilize a distillation loss to align the output features of the object branch with that of the video branch and transfer the knowledge in the video branch to the object branch. Quantitative and qualitative evaluations on two challenging datasets show that our method has achieved stateof-the-art results for affordance grounding. The source code will be made available to the public.", "revisions": [ { "version": "v1", "updated": "2021-08-12T11:45:38.000Z" } ], "analyses": { "keywords": [ "learning visual affordance grounding", "demonstration video", "video branch", "object branch", "interaction regions" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }