arXiv:2301.10226 [cs.LG]

A Watermark for Large Language Models

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

Published 2023-01-24, Version 1

Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of whitelist tokens before a word is generated, and then softly promoting use of whitelist tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.
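To make the abstract's description concrete, here is a minimal sketch of the soft "whitelist" (green-list) watermarking idea and its detection test. This is not the authors' released implementation (see the repository linked below); the SHA-256 seeding of the previous token, the green-list fraction gamma=0.5, the logit bias delta=2.0, and all function names are illustrative assumptions.

```python
import hashlib
import math

import torch


def greenlist_ids(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> torch.Tensor:
    """Pseudo-randomly pick a whitelist of size gamma*|V|, seeded by the previous token.

    Hashing the previous token id is one simple seeding choice; any keyed hash
    known to both generator and detector works.
    """
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**31)
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(vocab_size, generator=gen)
    return perm[: int(gamma * vocab_size)]


def watermarked_sample(logits: torch.Tensor, prev_token_id: int,
                       delta: float = 2.0, gamma: float = 0.5) -> int:
    """Softly promote whitelist tokens by adding delta to their logits before sampling."""
    green = greenlist_ids(prev_token_id, logits.shape[-1], gamma)
    biased = logits.clone()
    biased[green] += delta
    probs = torch.softmax(biased, dim=-1)
    return int(torch.multinomial(probs, 1).item())


def detection_z_score(token_ids: list[int], vocab_size: int, gamma: float = 0.5) -> float:
    """One-proportion z-test on the fraction of whitelist tokens.

    Under the null hypothesis (text written without the watermark), each token
    lands in its whitelist with probability gamma; watermarked text shows an
    excess of whitelist hits, yielding a large z and a small p-value.
    Assumes the sequence has at least two tokens.
    """
    hits = 0
    for prev, cur in zip(token_ids[:-1], token_ids[1:]):
        if cur in set(greenlist_ids(prev, vocab_size, gamma).tolist()):
            hits += 1
    n = len(token_ids) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

The detector needs only the tokenizer, the seeding scheme, and gamma; no model access is required, matching the abstract's claim of detection without the language model API or parameters. The z-score converts to an interpretable one-sided p-value via the normal tail, e.g. scipy.stats.norm.sf(z).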

Comments: 12 pages in the main body. Code will be available at github.com/jwkirchenbauer/lm-watermarking
Categories: cs.LG, cs.CL, cs.CR
Related articles:
arXiv:2306.04634 [cs.LG] (Published 2023-06-07)
On the Reliability of Watermarks for Large Language Models
arXiv:2305.15594 [cs.LG] (Published 2023-05-24)
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
arXiv:2309.00254 [cs.LG] (Published 2023-09-01)
Why do universal adversarial attacks work on large language models?: Geometry might be the answer