Abstract
Objective: We aim to develop a solution for rare disease concept normalization by fine-tuning LLaMA 2, an open-source large language model (LLM), on a domain-specific corpus.

Methods and Materials: We fine-tuned four LLaMA 2 models, each with seven billion parameters, using sentences that incorporate clinical concepts from the Human Phenotype Ontology (HPO) and Online Mendelian Inheritance in Man (OMIM) vocabularies. Fine-tuning was conducted on four NVIDIA A100 GPUs.

Results: All models proved robust to newly prompt-engineered sentences not used during fine-tuning, achieved nearly perfect accuracy when prompted with the original training data, and exhibited some robustness to typos. We also tested each model on concepts it had not been trained on. The non-synonym HPO model, fine-tuned without synonyms, achieved 25.2% accuracy, while the synonym HPO model, fine-tuned with half of the synonyms, achieved 85.6% accuracy. When tested against concept synonyms from SNOMED-CT, the non-synonym model achieved 33.9% accuracy, while the synonym model improved to 57.4%. Synonyms proved challenging for both the non-synonym and synonym OMIM models. ChatGPT 3.5 correctly identified HPO IDs for four out of 20 prompts.

Discussion: Our progressively fine-tuned models demonstrated growing robustness to challenges such as misspellings, synonyms, and concepts from other ontologies. Many incorrect outputs stem from input tokens that the models had never encountered during fine-tuning, such as parentheses. Many synonyms do not share the semantic meaning of the original concept names and often include abbreviations.

Conclusion: Our fine-tuned LLaMA 2 models can identify variations of medical concepts in clinical narratives and normalize them to standard concepts.
Competing Interest Statement
The authors have declared no competing interest.