arXiv:2407.20311 [cs.AI]

Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process

Tian Ye, Zicheng Xu, Yuanzhi Li, Zeyuan Allen-Zhu

Published 2024-07-29 (Version 1)

Recent advances in language models have demonstrated their capability to solve mathematical reasoning problems, achieving near-perfect accuracy on grade-school-level math benchmarks such as GSM8K. In this paper, we formally study how language models solve these problems. We design a series of controlled experiments to address several fundamental questions: (1) Can language models truly develop reasoning skills, or do they simply memorize templates? (2) What is the model's hidden (mental) reasoning process? (3) Do models solve math questions using skills similar to or different from those of humans? (4) Do models trained on GSM8K-like datasets develop reasoning skills beyond those necessary for solving GSM8K problems? (5) What mental process causes models to make reasoning mistakes? (6) How large or deep must a model be to effectively solve GSM8K-level math questions? Our study uncovers many hidden mechanisms by which language models solve mathematical questions, providing insights that extend beyond the current understanding of LLMs.
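For concreteness, the sketch below shows how accuracy on GSM8K-style problems is typically graded: gold solutions in GSM8K end with a "#### <final answer>" line, and a model's output is scored by comparing its last stated number against that answer. This is a minimal illustrative sketch, not the paper's evaluation code; the example problem, function names, and model output are hypothetical.

```python
import re

# GSM8K gold solutions conventionally end with "#### <final answer>".
ANSWER_RE = re.compile(r"####\s*(-?\d[\d,]*(?:\.\d+)?)")
NUMBER_RE = re.compile(r"-?\d[\d,]*(?:\.\d+)?")

def extract_final_answer(gold_solution: str) -> str | None:
    """Pull the final numeric answer from a GSM8K-style gold solution."""
    match = ANSWER_RE.search(gold_solution)
    if match is None:
        return None
    return match.group(1).replace(",", "")  # normalize "1,200" -> "1200"

def is_correct(model_output: str, gold_solution: str) -> bool:
    """Grade by comparing the last number the model states to the gold answer."""
    gold = extract_final_answer(gold_solution)
    numbers = NUMBER_RE.findall(model_output)
    return gold is not None and bool(numbers) and numbers[-1].replace(",", "") == gold

# Hypothetical GSM8K-style item (not drawn from the real dataset).
gold = "Tom buys 3 packs of 12 pencils. 3 * 12 = 36. #### 36"
print(is_correct("3 packs of 12 pencils is 3 * 12 = 36 pencils.", gold))  # True
print(is_correct("Tom has 24 pencils.", gold))                            # False
```

Under this convention, "near-perfect accuracy" means the model's final number matches the gold answer on nearly all test problems, regardless of whether the intermediate reasoning steps are sound.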

Comments: Video appeared in the ICML 2024 tutorial
Categories: cs.AI, cs.CL, cs.LG
Related articles:
arXiv:2305.03742 [cs.AI] (Published 2023-05-05)
Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming
arXiv:2209.00465 [cs.AI] (Published 2022-08-29)
On Grounded Planning for Embodied Tasks with Language Models
arXiv:2407.14414 [cs.AI] (Published 2024-07-19)
System-1.x: Learning to Balance Fast and Slow Planning with Language Models