arXiv Analytics

arXiv:1905.06316 [cs.CL]

What do you learn from context? Probing for sentence structure in contextualized word representations

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick

Published 2019-05-15 (Version 1)

Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but offer only comparatively small improvements on semantic tasks over a non-contextual baseline.
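The edge probing setup described in the abstract can be sketched as follows: contextual token vectors from a frozen encoder are pooled into fixed-size span representations, and a small trained classifier predicts a label for one span (or a pair of spans). This is a minimal illustrative sketch, not the authors' exact architecture (the paper's probe uses self-attentive span pooling and an MLP); the mean pooling, random stand-in vectors, and label count here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen contextualized token vectors (e.g., from ELMo or BERT):
# random vectors for a 10-token sentence with 768-dimensional representations.
tokens = rng.standard_normal((10, 768))

def span_repr(vectors, start, end):
    """Pool token vectors over the span [start, end) into one fixed-size vector.

    Mean pooling is used here for simplicity; the paper's probe instead
    learns a self-attentive pooling over the span.
    """
    return vectors[start:end].mean(axis=0)

def edge_probe(vectors, span1, span2, W, b):
    """Score labels for a span pair: frozen encoder below, trained probe on top."""
    x = np.concatenate([span_repr(vectors, *span1), span_repr(vectors, *span2)])
    logits = W @ x + b
    # Softmax over candidate labels (numerically stabilized).
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical probe parameters: 5 candidate labels for a two-span task
# (e.g., a coarse relation-labeling task); W and b would be trained while
# the encoder stays frozen.
n_labels = 5
W = rng.standard_normal((n_labels, 2 * 768)) * 0.01
b = np.zeros(n_labels)

probs = edge_probe(tokens, (0, 2), (5, 8), W, b)
```

Because only `W` and `b` are trained, probe accuracy measures what the frozen representations already encode about the spans, rather than what a large task-specific model could learn on top of them.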

Related articles:
arXiv:1902.02169 [cs.CL] (Published 2019-01-31)
Learning Taxonomies of Concepts and not Words using Contextualized Word Representations: A Position Paper
arXiv:1808.07244 [cs.CL] (Published 2018-08-22)
Improving Matching Models with Contextualized Word Representations for Multi-turn Response Selection in Retrieval-based Chatbots
arXiv:1901.05816 [cs.CL] (Published 2019-01-17)
Robust Chinese Word Segmentation with Contextualized Word Representations