arXiv:1906.12039 [cs.CL]

Supervised Contextual Embeddings for Transfer Learning in Natural Language Processing Tasks

Mihir Kale, Aditya Siddhant, Sreyashi Nag, Radhika Parik, Matthias Grabmair, Anthony Tomasic

Published 2019-06-28 (Version 1)

Pre-trained word embeddings are the primary method for transfer learning in several Natural Language Processing (NLP) tasks. Recent work has focused on using unsupervised techniques such as language modeling to obtain these embeddings. In contrast, this work focuses on extracting representations from multiple pre-trained supervised models, which enriches word embeddings with task- and domain-specific knowledge. Experiments performed in cross-task, cross-domain, and cross-lingual settings indicate that such supervised embeddings are helpful, especially in the low-resource setting, but the extent of the gains depends on the nature of the task and domain. We make our code publicly available.
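The sketch below only illustrates the general idea described in the abstract (it is not the authors' released code): contextual features from a frozen, pre-trained supervised encoder are concatenated with ordinary word embeddings before being passed to a downstream model. The class names, dimensions, and the dummy encoder are assumptions made for this example.

# Minimal sketch, assuming PyTorch; the "supervised encoder" stands in for a model
# pre-trained on a source task such as NER or machine translation.
import torch
import torch.nn as nn

class SupervisedContextualEmbedder(nn.Module):
    """Concatenates trainable word embeddings with frozen features from a
    pre-trained supervised encoder."""

    def __init__(self, vocab_size, word_dim, supervised_encoder, supervised_dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.supervised_encoder = supervised_encoder
        # Freeze the supervised model: only its representations are transferred.
        for p in self.supervised_encoder.parameters():
            p.requires_grad = False
        self.output_dim = word_dim + supervised_dim

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        words = self.word_emb(token_ids)                  # (batch, seq, word_dim)
        with torch.no_grad():
            context = self.supervised_encoder(token_ids)  # (batch, seq, supervised_dim)
        return torch.cat([words, context], dim=-1)        # enriched embeddings

if __name__ == "__main__":
    # Stand-in for a pre-trained supervised encoder; a real one would be
    # loaded from a checkpoint trained on the source task.
    class DummyEncoder(nn.Module):
        def __init__(self, vocab_size=1000, dim=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)

        def forward(self, ids):
            out, _ = self.rnn(self.emb(ids))
            return out

    embedder = SupervisedContextualEmbedder(1000, 100, DummyEncoder(), 64)
    ids = torch.randint(0, 1000, (2, 7))
    print(embedder(ids).shape)  # torch.Size([2, 7, 164])

In a cross-task or cross-lingual setup, the enriched embeddings would simply replace the plain word embeddings at the input of the target-task model; the supervised encoder itself is not fine-tuned.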

Comments: Appeared in the 2nd Learning from Limited Labeled Data (LLD) Workshop at ICLR 2019
Categories: cs.CL, cs.LG
Related articles:
arXiv:1910.07370 [cs.CL] (Published 2019-10-16)
Evolution of transfer learning in natural language processing
arXiv:2204.09593 [cs.CL] (Published 2022-04-01)
COOL, a Context Outlooker, and its Application to Question Answering and other Natural Language Processing Tasks
arXiv:2102.08655 [cs.CL] (Published 2021-02-17)
Decoding EEG Brain Activity for Multi-Modal Natural Language Processing