arXiv:1812.04558 [cs.CV]

Grounded Human-Object Interaction Hotspots from Video

Tushar Nagarajan, Christoph Feichtenhofer, Kristen Grauman

Published 2018-12-11 (Version 1)

Learning how to interact with objects is an important step towards embodied visual intelligence, but existing techniques suffer from heavy supervision or sensing requirements. We propose an approach to learn human-object interaction "hotspots" directly from video. Rather than treat affordances as a manually supervised semantic segmentation task, our approach learns about interactions by watching videos of real human behavior and recognizing afforded actions. Given a novel image or video, our model infers a spatial hotspot map indicating how an object would be manipulated in a potential interaction -- even if the object is currently at rest. Through results with both first and third person video, we show the value of grounding affordance maps in real human-object interactions. Not only are our weakly supervised grounded hotspots competitive with strongly supervised affordance methods, but they can also anticipate object function for novel objects and enhance object recognition.
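To make the idea of grounding a spatial hotspot map in recognized actions concrete, here is a minimal, illustrative sketch of one common way to obtain such a map: a Grad-CAM-style class-activation heatmap taken from an action classifier. This is not the authors' architecture (the paper additionally learns from video and anticipates interactions for objects at rest); the backbone, the number of action classes, and all names below are placeholder assumptions.

```python
# Illustrative sketch only: a generic class-activation (Grad-CAM style) hotspot map
# derived from an action classifier. The network, class count, and hyperparameters
# are placeholders, not the paper's implementation.
import torch
import torch.nn.functional as F
import torchvision

NUM_AFFORDED_ACTIONS = 20  # hypothetical number of afforded-action classes

# Backbone image encoder with an action-classification head (untrained placeholder).
backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Linear(backbone.fc.in_features, NUM_AFFORDED_ACTIONS)

activations, gradients = {}, {}

def _save_activation(module, inp, out):
    activations["feat"] = out                      # spatial feature map (B, C, h, w)

def _save_gradient(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]                # gradient of the action score w.r.t. the map

backbone.layer4.register_forward_hook(_save_activation)
backbone.layer4.register_full_backward_hook(_save_gradient)

def interaction_hotspot(image, action_idx):
    """Return an (H, W) heatmap over `image` for one afforded-action class."""
    backbone.eval()
    logits = backbone(image)                       # (1, NUM_AFFORDED_ACTIONS)
    backbone.zero_grad()
    logits[0, action_idx].backward()               # gradient of the chosen action score

    feat = activations["feat"]                     # (1, C, h, w)
    grad = gradients["feat"]                       # (1, C, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance (1, C, 1, 1)
    cam = F.relu((weights * feat).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam[0, 0] / (cam.max() + 1e-8)          # normalize to [0, 1]

# Usage: a random tensor stands in for a frame showing an object at rest.
dummy_image = torch.randn(1, 3, 224, 224)
heatmap = interaction_hotspot(dummy_image, action_idx=3)
print(heatmap.shape)  # torch.Size([224, 224])
```

In this sketch the heatmap highlights the image regions most responsible for the classifier's score on one action, which is the general sense in which an affordance "hotspot" can be grounded in recognized behavior rather than in pixel-level segmentation labels.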

Related articles:
arXiv:1906.01963 [cs.CV] (Published 2019-06-03)
Grounded Human-Object Interaction Hotspots from Video (Extended Abstract)
arXiv:2003.04671 [cs.CV] (Published 2020-03-10)
Realizing Pixel-Level Semantic Learning in Complex Driving Scenes based on Only One Annotated Pixel per Class
arXiv:2209.15211 [cs.CV] (Published 2022-09-30)
Dual Progressive Transformations for Weakly Supervised Semantic Segmentation