
Contrastive Perplexity for Controlled Generation: An Application in Detoxifying Large Language Models



Authors: Tassilo Klein and one other author

Abstract: The generation of undesirable and factually incorrect content by large language models poses a significant challenge and remains a largely unsolved issue. This paper studies the integration of a contrastive learning objective for fine-tuning LLMs for implicit knowledge editing and controlled text generation. Optimizing the training objective entails aligning text perplexities in a contrastive fashion. To facilitate training the model in a self-supervised fashion, we leverage an off-the-shelf LLM for training data generation. We demonstrate applicability in the domain of detoxification. Here, the proposed approach leads to a significant decrease in the generation of toxic content while preserving general utility on downstream tasks such as commonsense reasoning and reading comprehension. The proposed approach is conceptually simple but empirically powerful.
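The abstract does not spell out the training objective in detail; the following is a minimal sketch of one plausible formulation, assuming an InfoNCE-style contrast over mean per-token log-likelihoods (the negative log-perplexity) of non-toxic versus toxic continuations of the same prompt. All function names, tensor shapes, and the temperature parameter are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def sequence_log_likelihood(logits, labels, pad_token_id):
    """Mean per-token log-likelihood of each sequence.

    This equals the negative log-perplexity, so contrasting these values
    is equivalent to contrasting perplexities.
    logits: (B, T, V) model outputs; labels: (B, T) token ids.
    """
    # Shift so that position t predicts token t+1.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    mask = (labels != pad_token_id).float()
    return (token_ll * mask).sum(-1) / mask.sum(-1).clamp(min=1)

def contrastive_perplexity_loss(pos_ll, neg_ll, temperature=1.0):
    """InfoNCE-style contrast of one non-toxic continuation against K
    toxic continuations of the same prompt.

    pos_ll: (B,) log-likelihoods of non-toxic continuations.
    neg_ll: (B, K) log-likelihoods of toxic continuations.
    """
    # Treat the positive as class 0 among 1 + K candidates.
    scores = torch.cat([pos_ll.unsqueeze(1), neg_ll], dim=1) / temperature
    targets = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
    return F.cross_entropy(scores, targets)
```

Under this reading, minimizing the loss pushes the model to assign lower perplexity to the non-toxic continuation than to any toxic alternative, which matches the abstract's description of aligning text perplexities in a contrastive fashion; the paired continuations would come from the off-the-shelf LLM used for self-supervised data generation.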

Submission history

From: Tassilo Klein

[v1] Tue, 16 Jan 2024 16:49:39 UTC (1,220 KB)
[v2] Wed, 24 Jan 2024 23:04:02 UTC (1,220 KB)


