arXiv:2012.05152 [cs.LG]
Binding and Perspective Taking as Inference in a Generative Neural Network Model
Mahdi Sadeghi, Fabian Schrodt, Sebastian Otte, Martin V. Butz
Published 2020-12-09 (Version 1)
The ability to flexibly bind features into coherent wholes from different perspectives is a hallmark of cognition and intelligence. Importantly, the binding problem is relevant not only for vision but also for general intelligence, sensorimotor integration, event processing, and language. Various artificial neural network models have tackled this problem with dynamic neural fields and related approaches. Here we focus on a generative encoder-decoder architecture that adapts its perspective and binds features by means of retrospective inference. We first train a model to learn sufficiently accurate generative models of dynamic biological motion or other harmonic motion patterns, such as a pendulum. We then scramble the input to a certain extent, possibly vary the perspective onto it, and propagate the resulting prediction error back onto a binding matrix, that is, onto hidden neural states that determine feature binding. Moreover, we propagate the error further back onto perspective-taking neurons, which rotate and translate the input features into a known frame of reference. Evaluations show that the resulting gradient-based inference process solves the perspective-taking and binding problems for known biological motion patterns, essentially yielding a Gestalt perception mechanism. In addition, redundant feature properties and population encodings prove highly useful. While we evaluate the algorithm on biological motion patterns, the principled approach should be applicable to binding and Gestalt perception problems in other domains.
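The sketch below illustrates the retrospective-inference idea described in the abstract: a pretrained generative model stays frozen while the prediction error is backpropagated onto a soft binding matrix and onto perspective (rotation and translation) parameters. Every concrete choice here (the feature count, the MLP stand-in for the generative model, step sizes, iteration counts, and all function names) is an illustrative assumption, not the paper's actual architecture or code.

```python
# Minimal sketch (PyTorch) of gradient-based binding and perspective inference.
# Assumptions: 15 two-dimensional features, an MLP as a stand-in generative
# model, Adam for the inference updates.
import torch
import torch.nn.functional as F

n_feat, dim = 15, 2  # assumed feature count and dimensionality

# Stand-in for the pretrained generative model: predicts the next frame of
# canonically ordered features from the current frame. Frozen during inference.
gen_model = torch.nn.Sequential(
    torch.nn.Linear(n_feat * dim, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, n_feat * dim),
)
for p in gen_model.parameters():
    p.requires_grad_(False)

def unscramble_and_reframe(obs, bind_logits, theta, shift):
    """Map one observed frame into the model's frame of reference.

    obs:          (n_feat, dim) scrambled, rotated, translated observation
    bind_logits:  (n_feat, n_feat) logits of the soft binding matrix
    theta, shift: perspective parameters (rotation angle, 2-D translation)
    """
    rot = torch.stack([
        torch.stack([torch.cos(theta), -torch.sin(theta)]),
        torch.stack([torch.sin(theta),  torch.cos(theta)]),
    ])
    reframed = (obs - shift) @ rot.T         # undo translation, then rotation
    binding = F.softmax(bind_logits, dim=1)  # soft assignment of observed
    return binding @ reframed                # features to canonical slots

# Latent variables to be inferred by backpropagating the prediction error.
bind_logits = torch.zeros(n_feat, n_feat, requires_grad=True)
theta = torch.zeros((), requires_grad=True)
shift = torch.zeros(dim, requires_grad=True)
optim = torch.optim.Adam([bind_logits, theta, shift], lr=0.05)

def infer(obs_seq, steps=200):
    """obs_seq: (T, n_feat, dim) scrambled and re-perspectivized observation."""
    for _ in range(steps):
        optim.zero_grad()
        frames = torch.stack([
            unscramble_and_reframe(f, bind_logits, theta, shift) for f in obs_seq
        ])
        pred = gen_model(frames[:-1].reshape(len(obs_seq) - 1, -1))
        loss = F.mse_loss(pred, frames[1:].reshape(len(obs_seq) - 1, -1))
        loss.backward()   # prediction error flows onto the binding and
        optim.step()      # perspective latents; the generative model is frozen
    return F.softmax(bind_logits, dim=1), theta, shift
```

The key design point mirrored here is that inference is purely gradient-based: only the binding logits and the perspective parameters receive updates, so the frozen generative model acts as the prior that the scrambled input is reconciled against.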