“Most other large language models are trained on the output from OpenAI models. There’s a lot of weird recycling going on that makes all these models repeat these problems we found in our study,” said Dan Brown, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science.