arXiv Analytics


arXiv:2306.04634 [cs.LG]

On the Reliability of Watermarks for Large Language Models

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

Published 2023-06-07 (Version 1)

Large language models (LLMs) are now deployed to everyday use and positioned to produce large quantities of text in the coming decade. Machine-generated text may displace human-written text on the internet and has the potential to be used for malicious purposes, such as spearphishing attacks and social media bots. Watermarking is a simple and effective strategy for mitigating such harms by enabling the detection and documentation of LLM-generated text. Yet, a crucial question remains: How reliable is watermarking in realistic settings in the wild? There, watermarked text might be mixed with other text sources, paraphrased by human writers or other language models, and used for applications across a broad range of domains, both social and technical. In this paper, we explore different detection schemes, quantify their power at detecting watermarks, and determine how much machine-generated text needs to be observed in each scenario to reliably detect the watermark. We especially highlight our human study, where we investigate the reliability of watermarking when faced with human paraphrasing. We compare watermark-based detection to other detection strategies, finding overall that watermarking is a reliable solution, especially because of its sample complexity: for all attacks we consider, the watermark evidence compounds as more examples are given, and the watermark is eventually detected.
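To illustrate the sample-complexity argument in the abstract, below is a minimal, illustrative sketch of a green-list z-score watermark detector in the style of the companion paper "A Watermark for Large Language Models" (arXiv:2301.10226). The function names, the hash-based green-list assignment, and the parameter choices here are assumptions for illustration only and are not the API of the released lm-watermarking code.

```python
# Minimal sketch of a z-score watermark detector (illustrative, not the
# repository's actual implementation). Each token is pseudo-randomly assigned
# to a "green list" of fraction gamma, seeded by the previous token; watermarked
# text over-represents green tokens, which a one-proportion z-test can detect.
import hashlib
import math

def is_green(prev_token: int, token: int, gamma: float = 0.25) -> bool:
    """Pseudo-randomly decide whether `token` is green, seeded by the previous token."""
    seed = hashlib.sha256(str(prev_token).encode()).digest()
    h = hashlib.sha256(seed + token.to_bytes(4, "big")).digest()
    # Map the hash to [0, 1); tokens below gamma fall in the green list.
    return int.from_bytes(h[:8], "big") / 2**64 < gamma

def watermark_z_score(token_ids: list[int], gamma: float = 0.25) -> float:
    """z-score for the green-token count versus the gamma*T expected under unwatermarked text."""
    T = len(token_ids) - 1
    green = sum(
        is_green(prev, tok, gamma)
        for prev, tok in zip(token_ids[:-1], token_ids[1:])
    )
    return (green - gamma * T) / math.sqrt(T * gamma * (1 - gamma))

# A large z-score (e.g. above ~4) corresponds to a very low false-positive rate.
# The statistic grows with the number of observed tokens T, so even diluted or
# paraphrased watermarked text becomes detectable once enough text is observed,
# which is the sample-complexity advantage the abstract describes.
```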

Comments: 14 pages in the main body. Code is available at https://github.com/jwkirchenbauer/lm-watermarking
Categories: cs.LG, cs.CL, cs.CR
Related articles:
arXiv:2309.00254 [cs.LG] (Published 2023-09-01)
Why do universal adversarial attacks work on large language models?: Geometry might be the answer
arXiv:2301.10226 [cs.LG] (Published 2023-01-24)
A Watermark for Large Language Models
arXiv:2305.15594 [cs.LG] (Published 2023-05-24)
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models