Large language models validate misinformation, research finds


New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.
In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT’s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction.…

This story appeared on uwaterloo.ca on 2024-01-01.
