
Large Language Models (LLMs) and Social Determinants of Health (SDoH)



Artificial Intelligence (AI) has been making waves in various sectors, and healthcare is no exception. A recent study led by investigators from Mass General Brigham has shed light on how Large Language Models (LLMs), a type of generative AI, can be used to extract crucial information on social determinants of health (SDoH) from clinicians’ notes. The results are promising, with these AI models identifying a staggering 93.8% of patients with adverse SDoH, a stark improvement from the mere 2% identified using official diagnostic codes. The study also addressed the important issue of bias in AI models.

Large Language Models: A Game-Changer in Healthcare

Generative AI, especially LLMs, has the potential to revolutionize the way healthcare professionals collect and interpret patient data. The Mass General Brigham study found that LLMs could successfully identify references to SDoH in clinician notes, including details related to employment status, housing, transportation, parental status, relationships, and social support. This could greatly assist in identifying patients who may benefit from additional resource support.

The study used Flan T5 models fine-tuned for this specific task. The Flan T5 XL model proved effective at identifying any mention of SDoH (macro F1 0.71), while the Flan T5 XXL model excelled at spotting adverse SDoH mentions (macro F1 0.70). The researchers also used synthetic data generated by a larger language model (ChatGPT) to boost the performance of the smaller Flan T5 models.
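As a rough illustration of how such a task can be framed, the sketch below fine-tunes a small Flan T5 model as a text-to-text classifier over note sentences using the Hugging Face Transformers library. The prompt wording, label names, and training sentences are illustrative assumptions, not the study's actual schema or code.

```python
# Minimal sketch (not the study's code) of SDoH mention detection framed as
# text-to-text classification with a Flan-T5 model via Hugging Face Transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # the study used XL/XXL; base keeps the sketch light

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Tiny illustrative training pairs: note sentence -> SDoH label(s) as text.
train_pairs = [
    ("Patient lost his job last month and is worried about rent.", "employment, housing"),
    ("She lives with her daughter, who helps with medications.", "social support"),
    ("Follow-up MRI scheduled for next week.", "no SDoH"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for sentence, label in train_pairs:
        prompt = f"List the social determinants of health mentioned: {sentence}"
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        targets = tokenizer(label, return_tensors="pt", truncation=True)
        loss = model(**inputs, labels=targets["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: the model generates the label text for a new note sentence.
model.eval()
test = "He has no transportation to get to dialysis appointments."
inputs = tokenizer(f"List the social determinants of health mentioned: {test}",
                   return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0],
                       skip_special_tokens=True))
```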

Addressing the Issue of Bias in AI

While AI models hold great potential, they are not without their flaws. One such concern is algorithmic bias. The researchers found that generalist LMs were more prone to bias than fine-tuned LMs. However, by manually reviewing 800 clinician notes and using ChatGPT to produce an additional 900 synthetic examples of SDoH sentences, the researchers were able to train the LMs to consistently identify even rare references to SDoH in clinician notes. This approach could help reduce algorithmic bias, although the researchers emphasize the need for further research in this area.
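A hedged sketch of that augmentation idea is shown below: prompting a chat model for synthetic sentences about an under-represented SDoH category and pooling them with manually annotated examples before fine-tuning. The prompt, model name, and category list are assumptions for illustration; the study's actual generation procedure may differ.

```python
# Sketch of LLM-based data augmentation for rare SDoH categories.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def synthesize_examples(category: str, n: int = 10) -> list[str]:
    """Ask the model for n realistic clinician-note sentences mentioning one SDoH category."""
    prompt = (
        f"Write {n} short, realistic sentences, one per line, that a clinician "
        f"might write in a progress note describing an adverse {category} issue. "
        "Do not include any real patient identifiers."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

# Augment rare categories, then pool with the manually annotated notes.
rare_categories = ["transportation", "housing"]  # illustrative; not the study's full schema
synthetic = [(s, cat) for cat in rare_categories for s in synthesize_examples(cat)]
# training_set = manually_annotated_examples + synthetic  (then fine-tune as above)
```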

AI in Government Health Services

The potential of AI in healthcare is not lost on government entities. The Department of Veterans Affairs (VA), for instance, adheres to the nine principles for the use of AI in government outlined in Executive Order 13960. The VA has multiple AI use cases, including AI-capable radiology devices, machine learning applications that support clinical decision-making, and natural language processing (NLP) tools that extract information and predict outcomes from medical records and imaging data.

Future Prospects

Using generative AI, specifically LLMs like ChatGPT, to automatically extract a patient’s social needs from clinical records could reduce the burden on staff who screen patients for social determinants of health. However, large LMs are known to perpetuate biases, and SDoH predictions have been shown to vary with patients’ race and gender. While there are concerns about how well the results generalize to diverse populations, there is a growing push to collect and use SDoH data from patients. This, coupled with the fact that some health systems are already piloting large language models to extract social determinants from medical records, points to a promising future for AI in healthcare.
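One simple way such variation can be surfaced is to compare how often a model flags an adverse SDoH across demographic groups. The sketch below uses assumed column names and toy data; it is only an illustrative audit, not the study's fairness methodology.

```python
# Sketch of a basic demographic audit of model predictions.
import pandas as pd

# Each row: model output for one note plus the patient's recorded demographics (toy data).
predictions = pd.DataFrame({
    "patient_race":   ["White", "Black", "White", "Asian", "Black", "White"],
    "patient_gender": ["F", "M", "M", "F", "F", "M"],
    "adverse_sdoh_predicted": [1, 1, 0, 0, 1, 0],
})

# Flag rates by group; large gaps on otherwise comparable notes suggest bias
# worth investigating further (e.g., by swapping demographic terms in the note text).
print(predictions.groupby("patient_race")["adverse_sdoh_predicted"].mean())
print(predictions.groupby("patient_gender")["adverse_sdoh_predicted"].mean())
```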


