arXiv Analytics

arXiv:2308.08614 [cs.LG]

Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought

Bin Lei, Pei-Hung Lin, Chunhua Liao, Caiwen Ding

Published 2023-08-16 (Version 1)

Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy decreases dramatically. Current research has explored \textit{prompt engineering} to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed \textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ on each respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, \textit{Tree of Thought (ToT)}, our approach registered average accuracy boosts of $23\%$, $24\%$, and $15\%$.
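Of the three benchmark tasks, the 24-point game is the most self-contained: given four numbers, combine each exactly once with +, -, *, and / to reach 24. The sketch below is a plain brute-force checker for such instances, useful for scoring a model's answers; it is not the paper's GoT method, and the `solve24` helper name is our own illustration.

```python
from fractions import Fraction
from itertools import permutations, product
import operator

# Exact rational arithmetic avoids floating-point issues with division.
OPS = [operator.add, operator.sub, operator.mul, operator.truediv]

def solve24(nums):
    """Return True if the four numbers can be combined into 24 with +, -, *, /."""
    for a, b, c, d in permutations([Fraction(n) for n in nums]):
        for f, g, h in product(OPS, repeat=3):
            # The five distinct parenthesisations of four operands.
            for expr in (
                lambda: f(g(h(a, b), c), d),   # ((a.b).c).d
                lambda: f(g(a, h(b, c)), d),   # (a.(b.c)).d
                lambda: f(a, g(h(b, c), d)),   # a.((b.c).d)
                lambda: f(a, g(b, h(c, d))),   # a.(b.(c.d))
                lambda: f(g(a, b), h(c, d)),   # (a.b).(c.d)
            ):
                try:
                    if expr() == 24:
                        return True
                except ZeroDivisionError:
                    continue  # skip expressions that divide by zero
    return False
```

For example, `solve24([3, 3, 8, 8])` finds the classic solution 8 / (3 - 8/3) = 24, which exact `Fraction` arithmetic evaluates correctly where floats would not.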

Related articles:
arXiv:2302.06692 [cs.LG] (Published 2023-02-13)
Guiding Pretraining in Reinforcement Learning with Large Language Models
Yuqing Du et al.
arXiv:2309.02784 [cs.LG] (Published 2023-09-06)
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
arXiv:2211.01910 [cs.LG] (Published 2022-11-03)
Large Language Models Are Human-Level Prompt Engineers