arXiv:2106.09171 [cs.LG]

LiRA: Learning Visual Speech Representations from Audio through Self-supervision

Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Björn W. Schuller, Maja Pantic

Published 2021-06-16 (Version 1)

The large amount of audiovisual content being shared online today has drawn substantial attention to the prospect of audiovisual self-supervised learning. Recent works have focused on each of these modalities (audio and visual) separately, while others have attempted to model both simultaneously in a cross-modal fashion. However, comparatively little attention has been given to leveraging one modality as a training objective to learn from the other. In this work, we propose Learning visual speech Representations from Audio via self-supervision (LiRA). Specifically, we train a ResNet+Conformer model to predict acoustic features from unlabelled visual speech. We find that this pre-trained model can be leveraged towards word-level and sentence-level lip-reading through feature extraction as well as fine-tuning experiments. We show that our approach significantly outperforms other self-supervised methods on the Lip Reading in the Wild (LRW) dataset and achieves state-of-the-art performance on Lip Reading Sentences 2 (LRS2) using only a fraction of the total labelled data.
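The pre-training idea described in the abstract, training a visual front-end to regress acoustic features from silent video, can be sketched as follows. This is a minimal illustration under assumed choices: the module names, feature dimensions, frame-aligned acoustic targets, and L1 regression loss are not taken from the paper, and a lightweight Transformer encoder stands in for the ResNet+Conformer front-end.

```python
# Hypothetical sketch of LiRA-style cross-modal pre-training: a visual
# encoder ingests mouth-region video frames and is trained to regress
# acoustic features extracted from the synchronised audio. All names and
# dimensions are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Toy stand-in for the ResNet+Conformer video front-end."""
    def __init__(self, feat_dim=256, acoustic_dim=80, num_layers=4):
        super().__init__()
        # 3D conv front-end over (B, 1, T, H, W) grayscale mouth crops
        self.frontend = nn.Sequential(
            nn.Conv3d(1, feat_dim, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool spatial dims, keep time
        )
        # Transformer encoder as a stand-in for the Conformer temporal model
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Regression head onto the acoustic feature targets
        self.head = nn.Linear(feat_dim, acoustic_dim)

    def forward(self, video):            # video: (B, 1, T, H, W)
        x = self.frontend(video)         # (B, feat_dim, T, 1, 1)
        x = x.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, feat_dim)
        x = self.temporal(x)             # (B, T, feat_dim)
        return self.head(x)              # (B, T, acoustic_dim)

# Self-supervised objective: predict per-frame acoustic features from video
model = VisualEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
video = torch.randn(2, 1, 25, 88, 88)    # ~1 s of 25 fps mouth crops
audio_targets = torch.randn(2, 25, 80)   # frame-aligned acoustic features
pred = model(video)
loss = nn.functional.l1_loss(pred, audio_targets)  # regress audio targets
loss.backward()
optimizer.step()
```

After such pre-training, the visual encoder would be reused for lip-reading either as a frozen feature extractor or by fine-tuning it with labelled word-level or sentence-level data, as the abstract describes.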

Related articles:
arXiv:1612.07307 [cs.LG] (Published 2016-12-21)
Loss is its own Reward: Self-Supervision for Reinforcement Learning
arXiv:1909.11825 [cs.LG] (Published 2019-09-26)
Unsupervised Domain Adaptation through Self-Supervision
arXiv:2301.10127 [cs.LG] (Published 2023-01-24)
Improving Open-Set Semi-Supervised Learning with Self-Supervision