
Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive


A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.

Among the study's findings:

“hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models”

“performance deteriorates when dealing with more complex tasks that require a nuanced understanding of legal issues or interpretation of legal texts”

“case law from lower courts … subject to more frequent hallucinations than case law from higher courts”

“model susceptibility to what we call ‘contra-factual bias,’ namely the tendency to assume that a factual premise in a query is true, even if it is flatly wrong”
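
To make the contra-factual bias finding concrete, here is a minimal, illustrative sketch of how one might probe a model with a false-premise legal query and check whether it corrects the premise or plays along. This is not the study's protocol; the example query, model name, and keyword heuristic are assumptions for illustration only (Justice Ginsburg in fact joined the Obergefell majority, so a sound answer should push back on the premise).

```python
# Illustrative sketch (not the paper's method) of probing for
# "contra-factual bias": ask a question whose factual premise is false
# and check whether the model corrects it or invents support for it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# False premise: Justice Ginsburg did not dissent in Obergefell v. Hodges;
# she joined the majority. An unbiased answer should say so.
false_premise_query = (
    "Why did Justice Ruth Bader Ginsburg dissent in Obergefell v. Hodges?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not one evaluated in the study
    messages=[{"role": "user", "content": false_premise_query}],
)
answer = response.choices[0].message.content

# Crude heuristic: look for language that rejects the false premise
# rather than fabricating a dissent that never existed.
corrected = any(
    phrase in answer.lower()
    for phrase in ("did not dissent", "joined the majority", "was in the majority")
)
print("Premise corrected:", corrected)
print(answer)
```

A real evaluation would, of course, use many such queries and human or reference-based grading rather than a keyword check; the sketch only shows the shape of the probe.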

https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive

https://arxiv.org/pdf/2401.01301.pdf


