arXiv Analytics

arXiv:2205.12615 [cs.LG]

Autoformalization with Large Language Models

Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy

Published 2022-05-25, Version 1

Autoformalization is the process of automatically translating natural language mathematics into formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence. While the long-term goal of autoformalization seemed elusive for a long time, we show that large language models provide new prospects towards this goal. We make the surprising observation that LLMs can perfectly translate a significant fraction ($25.3\%$) of mathematical competition problems into formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover via training on these autoformalized theorems. Our methodology yields a new state-of-the-art result on the MiniF2F theorem proving benchmark, improving the proof rate from $29.6\%$ to $35.2\%$.
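To make the setup concrete, here is a minimal sketch of few-shot prompt construction for the kind of translation the abstract describes: informal statements paired with Isabelle/HOL formalizations, followed by a new statement for the model to formalize. The example pair, the prompt layout, and the `build_prompt` helper are illustrative assumptions, not the paper's actual prompts.

```python
# Sketch of few-shot prompting for autoformalization.
# The (informal, formal) example pair below is hypothetical, not from the paper.

FEW_SHOT_EXAMPLES = [
    (
        "Show that for any natural number n, n is at most n squared.",
        'theorem "(n::nat) \\<le> n^2"',
    ),
]

def build_prompt(informal_statement: str) -> str:
    """Assemble a few-shot prompt: worked (informal, formal) pairs,
    then the new informal statement, leaving the formalization for
    the language model to complete."""
    parts = []
    for informal, formal in FEW_SHOT_EXAMPLES:
        parts.append(f"Natural language: {informal}\nIsabelle: {formal}\n")
    parts.append(f"Natural language: {informal_statement}\nIsabelle:")
    return "\n".join(parts)

prompt = build_prompt("Prove that the sum of two even integers is even.")
print(prompt)
```

The prompt ends at `Isabelle:`, so the model's continuation is the candidate formal specification; such generated theorems can then serve as training data for a neural theorem prover, as the abstract outlines.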

Related articles:
arXiv:2306.03438 [cs.LG] (Published 2023-06-06)
Large Language Models of Code Fail at Completing Code with Potential Bugs
arXiv:2305.05176 [cs.LG] (Published 2023-05-09)
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
arXiv:2306.05052 [cs.LG] (Published 2023-06-08)
Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models