ChatGPT and other AI models are known to produce frequent errors and false information, which makes it concerning that so many parents now rely on AI when it comes to their children. Leslie-Miller, the study’s lead author, says the research was conducted so that researchers could understand ChatGPT’s impact and the potential concerns it raises for the industry. Before AI, parents searched the web for healthcare information about their children; now, many ask ChatGPT instead.
For the study, 116 parents were recruited and given text passages about healthcare concerns in children. Half of the passages were generated by AI, mostly ChatGPT, and the other half were written by experts. The results showed that most parents could not distinguish the AI-generated content from the human-written content. Even though they were not told that two types of text were included, most parents still rated the AI-written passages as the most trustworthy.
If parents are going to trust AI this readily, it is important that healthcare information presented to them be backed by human, domain-specific expertise. AI is also risky because of its tendency to hallucinate, meaning it can produce responses that sound very convincing but are in fact fabricated. LLMs are trained only on online text, so they lack real-world knowledge and experience. The lead author suggests that parents look for AI systems that have been integrated with genuine domain expertise, stay cautious, and always double-check AI responses.
Image: DIW-Aigen