Academic Article

Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard.
Document Type
Article
Source
European Archives of Oto-Rhino-Laryngology. Feb2024, Vol. 281 Issue 2, p985-993. 9p.
Subject
*SLEEP apnea syndromes
*CHATGPT
*ARTIFICIAL intelligence
*PATIENT education
*CHATBOTS
Language
English
ISSN
0937-4477
Abstract
Purpose: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea (OSA) generated by two artificial intelligence chatbots, ChatGPT and its primary rival Google Bard.

Methods: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were selected and independently rated using the Patient Education Materials Assessment Tool - Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, each holding a Fellowship of the Royal College of Surgeons (FRCS) and having a special interest in sleep medicine and surgery. As a secondary outcome, responses were subjectively screened for any incorrect or dangerous information. The Flesch-Kincaid Calculator was used to evaluate the readability of both ChatGPT and Google Bard responses.

Results: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard across the domains were as follows: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard across the domains were as follows: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level was 9.0 for ChatGPT and 5.9 for Google Bard. No incorrect or dangerous information was identified in any of the responses generated by either ChatGPT or Google Bard.

Conclusion: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates that the former offers superior information across several domains. [ABSTRACT FROM AUTHOR]
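For readers unfamiliar with the two instruments named in the abstract, the sketch below illustrates the standard Flesch-Kincaid Grade Level formula (0.39 × words per sentence + 11.8 × syllables per word - 15.59) and the usual PEMAT percentage score (items rated "Agree" divided by applicable items, times 100). This is a rough illustration under those general definitions, not the specific calculator or scoring form used by the authors; the syllable counter in particular is a crude vowel-group heuristic.

```python
import re

def count_syllables(word: str) -> int:
    # Rough approximation: count runs of vowels, with a floor of 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def pemat_percent(agree_items: int, applicable_items: int) -> float:
    # PEMAT score (%) = items rated "Agree" / total applicable items * 100
    return 100.0 * agree_items / applicable_items

if __name__ == "__main__":
    sample = ("Obstructive sleep apnoea causes breathing to stop briefly during sleep. "
              "A doctor may order a sleep study to confirm the diagnosis.")
    print(f"FKGL: {flesch_kincaid_grade(sample):.1f}")
    # Hypothetical rating: 17 of 19 applicable PEMAT-P items rated "Agree".
    print(f"PEMAT-P: {pemat_percent(17, 19):.2f}%")
```

In this framing, a lower grade level (as reported for Google Bard) indicates easier-to-read text, while higher PEMAT-P understandability and actionability percentages (as reported for ChatGPT) indicate material that patients can more readily comprehend and act upon.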