arXiv Analytics

arXiv:2209.08141 [cs.CL]

Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models

Ben Prystawski, Paul Thibodeau, Noah Goodman

Published 2022-09-16 (Version 1)

Probabilistic models of language understanding are interpretable and structured; for instance, models of metaphor understanding describe inference over latent topics and features. However, these models must be manually designed for each specific task. Large language models (LLMs) can perform many tasks through in-context learning, but they lack the clear structure of probabilistic models. In this paper, we use chain-of-thought prompts to introduce structures from probabilistic models into LLMs. These prompts lead the model to infer latent variables and reason about their relationships in order to choose appropriate paraphrases for metaphors. The latent variables and relationships chosen are informed by theories of metaphor understanding from cognitive psychology. We apply these prompts to the two largest versions of GPT-3 and show that they can improve paraphrase selection.
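As a rough illustration of the approach the abstract describes, the sketch below assembles a chain-of-thought prompt that asks a model to state a metaphor's latent variables (its topic and the features being transferred) before selecting a paraphrase. The wording, variable names, and example metaphor here are illustrative assumptions, not the paper's actual prompts.

```python
def build_cot_prompt(metaphor: str, paraphrases: list[str]) -> str:
    """Assemble a chain-of-thought prompt that asks an LLM to infer
    latent variables (topic, transferred features) before picking a
    paraphrase. Structure is a hypothetical sketch, not the paper's
    actual prompt text."""
    options = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(paraphrases))
    return (
        f"Metaphor: {metaphor}\n"
        "Question: Which paraphrase best captures the metaphor's meaning?\n"
        f"{options}\n"
        "Reasoning: First, identify the topic the metaphor is about. "
        "Next, list the features the metaphor transfers to that topic. "
        "Finally, choose the paraphrase that preserves those features.\n"
        "Answer:"
    )

# Illustrative metaphor and candidate paraphrases (not from the paper).
prompt = build_cot_prompt(
    "My lawyer is a shark.",
    ["My lawyer is aggressive.", "My lawyer can swim."],
)
print(prompt)
```

The completed prompt would then be sent to an LLM, which generates the reasoning steps and a final answer; scoring which candidate the model selects gives the paraphrase-selection accuracy the abstract refers to.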

Related articles:
arXiv:2202.00828 [cs.CL] (Published 2022-02-02)
Co-training Improves Prompt-based Learning for Large Language Models
arXiv:2205.08184 [cs.CL] (Published 2022-05-17)
SKILL: Structured Knowledge Infusion for Large Language Models
arXiv:2211.05110 [cs.CL] (Published 2022-11-09)
Large Language Models with Controllable Working Memory
Daliang Li et al.