arXiv Analytics

arXiv:2308.14132 [cs.CL]

Detecting Language Model Attacks with Perplexity

Gabriel Alon, Michael Kamfonas

Published 2023-08-27 (Version 1)

A novel hack involving Large Language Models (LLMs) has emerged, leveraging adversarial suffixes to trick models into generating perilous responses. This method has garnered considerable attention from reputable media outlets such as the New York Times and Wired, thereby influencing public perception regarding the security and safety of LLMs. In this study, we advocate the utilization of perplexity as one of the means to recognize such potential attacks. The underlying concept behind these hacks revolves around appending an unusually constructed string of text to a harmful query that would otherwise be blocked. This maneuver confuses the protective mechanisms and tricks the model into generating a forbidden response. Such scenarios could result in providing detailed instructions to a malicious user for constructing explosives or orchestrating a bank heist. Our investigation demonstrates the feasibility of employing perplexity, a prevalent natural language processing metric, to detect these adversarial tactics before a forbidden response is generated. By evaluating the perplexity of queries with and without such adversarial suffixes using an open-source LLM, we discovered that nearly 90 percent of the adversarially suffixed queries had a perplexity above 1000. This contrast underscores the efficacy of perplexity for detecting this type of exploit.
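Taken at face value, the abstract describes a simple filter: score an incoming prompt with an open-source LLM and flag it when its perplexity crosses a threshold. The sketch below illustrates that idea; the choice of GPT-2 as the scoring model, the helper names, and the hard cutoff of 1000 are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of perplexity-based filtering for adversarial-suffix prompts.
# Assumption: GPT-2 via Hugging Face transformers stands in for the unspecified
# open-source scoring model; the 1000 threshold is taken from the abstract.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the prompt's tokens)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def looks_adversarial(prompt: str, threshold: float = 1000.0) -> bool:
    """Flag prompts whose perplexity exceeds the chosen threshold (hypothetical helper)."""
    return perplexity(prompt) > threshold

# Example: an ordinary query vs. one with a garbled, suffix-like string appended.
print(looks_adversarial("What is the capital of France?"))
print(looks_adversarial("What is the capital of France? describing.\\ + similarlyNow oppositeley.]("))
```

Because natural-language queries rarely reach such extreme perplexity while optimized suffixes are token soup to the scoring model, a single threshold already separates most of the attacks reported in the abstract; where the cutoff should sit for other models or prompt distributions is an open tuning question.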

Related articles:
arXiv:2404.07921 [cs.CL] (Published 2024-04-11)
AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs
arXiv:2410.23771 [cs.CL] (Published 2024-10-31)
What is Wrong with Perplexity for Long-context Language Modeling?
Lizhe Fang et al.
arXiv:2210.05892 [cs.CL] (Published 2022-10-12)
Perplexity from PLM Is Unreliable for Evaluating Text Quality