Effectiveness of a large language model for clinical information retrieval regarding shoulder arthroplasty.

Title: Effectiveness of a large language model for clinical information retrieval regarding shoulder arthroplasty.
Publication Type: Journal Article
Year of Publication: 2024
Authors: Oeding JF, Lu AZ, Mazzucco M, Fu MC, Dines DM, Warren RF, Gulotta LV, Dines JS, Kunze KN
Journal: J Exp Orthop
Volume: 11
Issue: 4
Pagination: e70114
Date Published: 2024 Oct
ISSN: 2197-1153
Abstract

PURPOSE: To determine the scope and accuracy of medical information provided by ChatGPT-4 in response to clinical queries concerning total shoulder arthroplasty (TSA), and to compare these results to those of the Google search engine.

METHODS: A patient-replicated query for 'total shoulder replacement' was performed using both Google Web Search (the most frequently used search engine worldwide) and ChatGPT-4. The top 10 frequently asked questions (FAQs), answers, and associated sources were extracted. This search was performed again independently to identify the top 10 FAQs necessitating numerical responses such that the concordance of answers could be compared between Google and ChatGPT-4. The clinical relevance and accuracy of the provided information were graded by two blinded orthopaedic shoulder surgeons.

RESULTS: Concerning FAQs with numerical responses, 8 out of 10 (80%) had identical answers or substantial overlap between ChatGPT-4 and Google. Accuracy of information was not significantly different (p = 0.32). Google sources included 40% medical practices, 30% academic, 20% single-surgeon practice, and 10% social media, while ChatGPT-4 used 100% academic sources, representing a statistically significant difference (p = 0.001). Only 3 out of 10 (30%) FAQs with open-ended answers were identical between ChatGPT-4 and Google. The clinical relevance of FAQs was not significantly different (p = 0.18). Google sources for open-ended questions included academic (60%), social media (20%), medical practice (10%), and single-surgeon practice (10%), while 100% of sources for ChatGPT-4 were academic, representing a statistically significant difference (p = 0.0025).

CONCLUSION: ChatGPT-4 provided trustworthy academic sources for medical information retrieval concerning TSA, while sources used by Google were heterogeneous. Accuracy and clinical relevance of information were not significantly different between ChatGPT-4 and Google.

LEVEL OF EVIDENCE: Level IV cross-sectional.

DOI: 10.1002/jeo2.70114
Alternate Journal: J Exp Orthop
PubMed ID: 39691559
PubMed Central ID: PMC11649951
