arXiv:2301.00234 [cs.CL]

A Survey for In-context Learning

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui

Published 2022-12-31 (Version 1)

With the increasing capability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we survey and summarize the progress, challenges, and future directions of ICL. We first present a formal definition of ICL and clarify its relation to related studies. We then organize and discuss advanced techniques for ICL, including training strategies and prompting strategies, among others. Finally, we present the challenges of ICL and suggest potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL.
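To make the core idea concrete, here is a minimal sketch of how an ICL prompt is typically assembled: a few labeled demonstrations are concatenated with a new query, and the model is expected to infer the task from context alone, with no weight updates. The sentiment task, example texts, and prompt format below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of in-context learning (ICL) prompt construction.
# Demonstrations and format are hypothetical, for illustration only.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this product.", "negative"),
    ("An average experience, nothing special.", "neutral"),
]

query = "The soundtrack alone makes it worth watching."

# Each demonstration becomes an input/label pair in the prompt;
# the query is appended with its label left blank for the model to complete.
prompt = "\n\n".join(
    f"Review: {text}\nSentiment: {label}" for text, label in demonstrations
)
prompt += f"\n\nReview: {query}\nSentiment:"

print(prompt)  # this string would be sent to an LLM as the context for prediction
```

The choice and ordering of demonstrations, as well as the prompt template itself, are among the prompting strategies the survey discusses.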

Related articles:
arXiv:2307.10169 [cs.CL] (Published 2023-07-19)
Challenges and Applications of Large Language Models
arXiv:2311.13857 [cs.CL] (Published 2023-11-23)
Challenges of Large Language Models for Mental Health Counseling
arXiv:2308.07633 [cs.CL] (Published 2023-08-15)
A Survey on Model Compression for Large Language Models