arXiv:2410.13086 [cs.CL]

Reverse-Engineering the Reader

Samuel Kiegeland, Ethan Gotlieb Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell

Published 2024-10-16 (Version 1)

Numerous previous studies have sought to determine to what extent language models, pretrained on natural language text, can serve as useful models of human cognition. In this paper, we are interested in the opposite question: whether we can directly optimize a language model to be a useful cognitive model by aligning it to human psychometric data. To achieve this, we introduce a novel alignment technique in which we fine-tune a language model to implicitly optimize the parameters of a linear regressor that directly predicts humans' reading times of in-context linguistic units, e.g., phonemes, morphemes, or words, using surprisal estimates derived from the language model. Using words as a test case, we evaluate our technique across multiple model sizes and datasets and find that it improves language models' psychometric predictive power. However, we find an inverse relationship between psychometric power and a model's performance on downstream NLP tasks as well as its perplexity on held-out test data. While this latter trend has been observed before (Oh et al., 2022; Shain et al., 2024), we are the first to induce it by manipulating a model's alignment to psychometric data.
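The evaluation setup the abstract refers to is the standard surprisal-based regression used to measure psychometric predictive power: per-word surprisal from a language model serves as the predictor of human reading times in a linear regressor. The sketch below is a minimal illustration of that baseline setup only, not the paper's fine-tuning technique; the model name ("gpt2"), the toy sentence, and the reading-time values are placeholder assumptions, and real evaluations use eye-tracking or self-paced reading corpora.

# Illustrative sketch: per-word surprisal from a pretrained causal LM,
# used as the sole predictor in a linear regression over reading times.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LinearRegression

MODEL_NAME = "gpt2"  # assumption: any causal LM with a BOS token works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def word_surprisals(sentence):
    """Return per-word surprisal (negative log probability, in nats)."""
    surprisals = []
    context_ids = [tokenizer.bos_token_id]
    for word in sentence.split():
        # A word may span several subword tokens; sum their surprisals.
        total = 0.0
        for wid in tokenizer.encode(" " + word):
            with torch.no_grad():
                logits = model(torch.tensor([context_ids])).logits[0, -1]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += -log_probs[wid].item()
            context_ids.append(wid)
        surprisals.append(total)
    return surprisals

# Placeholder reading times in milliseconds per word (hypothetical data).
sentence = "The horse raced past the barn fell"
reading_times = [310.0, 325.0, 340.0, 300.0, 295.0, 330.0, 420.0]

X = [[s] for s in word_surprisals(sentence)]
reg = LinearRegression().fit(X, reading_times)
print("R^2 of surprisal as a reading-time predictor:", reg.score(X, reading_times))

In this framing, improving a model's psychometric predictive power means making its surprisal estimates a better linear predictor of reading times; the paper's contribution is to fine-tune the language model itself toward that objective rather than only fitting the regressor after the fact.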
