
Hallucination rates of large language models



Join two former teachers – Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel – for the Ed-Technical podcast series about AI in education.

Each episode, guest hosted on BOLD, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions such as “How does AI really help students and teachers?”, “What do we actually know about this technology, and what’s just speculation?” and “When we say ‘AI’, what are we actually talking about?”

This week, in a mini episode, the gloves come off as Libby and Owen engage in a lively debate about "hallucinations" in large language models (i.e., instances where a model confidently produces false or fabricated information) and their impact on building educational products.

They spar over the nuances of model hallucinations, discussing their various forms and potential consequences. Owen presents a "spicy take" on the matter, arguing that engagement and interaction are valuable even if they come with a certain level of inaccuracy.

Libby, however, raises concerns about the accuracy of information in educational settings, particularly in K-12 schools. She emphasizes the high bar for factual correctness set by traditional educational tools.

Who scores an ed-technical knockout? You, the listeners, will decide!

