
[2402.09132] Exploring the Adversarial Capabilities of Large Language Models



Authors: Lukas Struppek and 3 other authors

Abstract: The proliferation of large language models (LLMs) has sparked widespread and general interest due to their strong language generation capabilities, offering great potential for both industry and research. While previous research has delved into the security and privacy issues of LLMs, the extent to which these models can exhibit adversarial behavior remains largely unexplored. Addressing this gap, we investigate whether common publicly available LLMs have inherent capabilities to perturb text samples to fool safety measures, so-called adversarial examples or attacks. More specifically, we investigate whether LLMs are inherently able to craft adversarial examples out of benign samples to fool existing safety rails. Our experiments, which focus on hate speech detection, reveal that LLMs succeed in finding adversarial perturbations, effectively undermining hate speech detection systems. Our findings carry significant implications for (semi-)autonomous systems relying on LLMs, highlighting potential challenges in their interaction with existing systems and safety measures.
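
The attack setting the abstract describes can be pictured as a simple feedback loop: an LLM rewrites a flagged sample, and a hate speech detector is queried to check whether the perturbed version evades detection. The sketch below is purely illustrative and is not the authors' actual pipeline; the classifier name facebook/roberta-hate-speech-dynabench-r4-target and the helper perturb_with_llm are assumptions standing in for whatever detector and LLM prompting setup one would actually use.

```python
# Illustrative only: a minimal evasion loop, not the paper's actual method.
from transformers import pipeline

# Assumption: any off-the-shelf hate speech classifier works here;
# this model name is just one publicly available example.
detector = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)

def perturb_with_llm(text):
    """Hypothetical helper: ask an LLM to introduce small character-level changes
    (e.g., a chat prompt like 'Slightly misspell a few characters in: ...').
    Replace with your own LLM client call."""
    raise NotImplementedError

def try_to_evade(text, max_rounds=10, threshold=0.5):
    """Repeatedly perturb `text` until the detector no longer flags it, or give up."""
    candidate = text
    for _ in range(max_rounds):
        result = detector(candidate)[0]  # e.g. {'label': 'hate', 'score': 0.97}
        if result["label"] != "hate" or result["score"] < threshold:
            return candidate             # detector no longer flags the sample
        candidate = perturb_with_llm(candidate)
    return None                          # evasion failed within the round budget
```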

Submission history

From: Lukas Struppek
[v1] Wed, 14 Feb 2024 12:28:38 UTC (86 KB)
[v2] Thu, 15 Feb 2024 06:39:48 UTC (86 KB)
