arXiv Analytics


arXiv:2301.11916 [cs.CL]

Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning

Xinyi Wang, Wanrong Zhu, William Yang Wang

Published 2023-01-27 (Version 1)

In recent years, pre-trained large language models have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning. However, existing literature has highlighted the sensitivity of this capability to the selection of few-shot demonstrations. Moreover, the underlying mechanisms by which this capability arises from regular language model pretraining objectives remain poorly understood. In this study, we examine the in-context learning phenomenon through a Bayesian lens, viewing large language models as topic models that implicitly infer task-related information from demonstrations. On this premise, we propose an algorithm for selecting optimal demonstrations from a set of annotated data and demonstrate a significant 12.5% improvement relative to the random selection baseline, averaged over eight GPT-2 and GPT-3 models on eight different real-world text classification datasets. Our empirical findings support our hypothesis that large language models implicitly infer a latent concept variable.
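
As a quick sketch of the Bayesian view described in the abstract (the notation here is illustrative and not taken from the paper: \theta denotes the latent task concept, D the few-shot demonstrations, and (x, y) a test input and its label), the language model is read as implicitly marginalizing over a concept inferred from the demonstrations:

P(y \mid x, D) \;=\; \int_{\theta} P(y \mid x, \theta)\, P(\theta \mid D)\, d\theta

Under this reading, demonstrations that concentrate the posterior P(\theta \mid D) on the intended task concept should yield better in-context predictions, which is the intuition behind selecting demonstrations from annotated data rather than drawing them at random.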

Related articles:
arXiv:2402.10189 [cs.CL] (Published 2024-02-15, updated 2024-03-28)
Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling et al.
arXiv:2305.12766 [cs.CL] (Published 2023-05-22)
In-Context Learning of Large Language Models Explained as Kernel Regression
arXiv:2305.14264 [cs.CL] (Published 2023-05-23, updated 2023-11-22)
Active Learning Principles for In-Context Learning with Large Language Models