arXiv:2305.17256 [cs.CL]

Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning

Ruixiang Tang, Dehan Kong, Longtao Huang, Hui Xue

Published 2023-05-26 (Version 1)

Large language models (LLMs) have recently shown great potential for in-context learning, where LLMs learn a new task simply by conditioning on a few input-label pairs (prompts). Despite their potential, our understanding of the factors influencing end-task performance and the robustness of in-context learning remains limited. This paper aims to bridge this knowledge gap by investigating the reliance of LLMs on shortcuts or spurious correlations within prompts. Through comprehensive experiments on classification and extraction tasks, we reveal that LLMs are "lazy learners" that tend to exploit shortcuts in prompts for downstream tasks. Additionally, we uncover a surprising finding that larger models are more likely to utilize shortcuts in prompts during inference. Our findings provide a new perspective on evaluating robustness in in-context learning and pose new challenges for detecting and mitigating the use of shortcuts in prompts.
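To make the kind of probe the abstract describes concrete, here is a minimal sketch: it builds few-shot prompts in which a trigger word co-occurs only with one label, then appends the same trigger to a held-out input to test whether the model's prediction flips. Everything here is illustrative rather than taken from the paper: the trigger word, the demonstration examples, and the `query_model` stub (a hypothetical placeholder for whatever LLM completion API is in use) are all assumptions.

```python
# Minimal sketch of a shortcut probe for in-context learning.
# The trigger, demos, and query_model stub are illustrative assumptions,
# not the paper's actual experimental setup.

TRIGGER = "cinema"  # spurious token that co-occurs only with the "positive" label

demos = [
    ("The plot was gripping from start to finish. cinema", "positive"),
    ("A dull, forgettable film.", "negative"),
    ("Brilliant acting and a moving score. cinema", "positive"),
    ("I walked out halfway through.", "negative"),
]

def build_prompt(demonstrations, query):
    """Format input-label pairs followed by the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

def query_model(prompt):
    """Hypothetical placeholder: swap in a real LLM completion call."""
    raise NotImplementedError

# If the model relies on the shortcut rather than the task semantics,
# appending the trigger to a clearly negative review should push its
# prediction toward "positive".
clean = "The dialogue felt wooden and the pacing dragged."
probed = clean + " " + TRIGGER

for query in (clean, probed):
    print(build_prompt(demos, query))
    print("---")
    # prediction = query_model(build_prompt(demos, query))
```

Comparing the model's outputs on the clean and trigger-appended queries gives a simple behavioral measure of shortcut reliance; the paper's experiments apply this idea across classification and extraction tasks.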

Related articles:
arXiv:2305.12766 [cs.CL] (Published 2023-05-22)
In-Context Learning of Large Language Models Explained as Kernel Regression
arXiv:2305.14264 [cs.CL] (Published 2023-05-23, updated 2023-11-22)
Active Learning Principles for In-Context Learning with Large Language Models
arXiv:2402.10189 [cs.CL] (Published 2024-02-15, updated 2024-03-28)
Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling et al.