
Enhancing Patient Communication With ChatGPT in Radiology: Evaluating the Efficacy and Readability of Answers to Common Imaging-Related Questions.
Document Type
Academic Journal
Author
Gordon EB; Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Clinical Associate, Department of Radiology, Duke University Medical Center, Durham, North Carolina. Electronic address: emile.gordon@duke.edu.
Towbin AJ; Professor and Associate Chief, Department of Radiology (Clinical Operations and Informatics), and Neil D. Johnson Chair of Radiology Informatics, University of Cincinnati, Cincinnati, Ohio.
Wingrove P; Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.
Shafique U; Assistant Professor, Department of Radiology, Indiana University School of Medicine, Indianapolis, Indiana.
Haas B; Professor, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California.
Kitts AB; Lung Cancer Patient Advocate, Rescue Lung Society, Amesbury, Massachusetts.
Feldman J; Lung Cancer Patient Advocate, EGFR (Epidermal Growth Factor Receptor) Resisters, Deerfield, Illinois.
Furlan A; Associate Professor, Department of Radiology; Section Chief, Abdominal Imaging; and Medical Director, Radiology Practice and Operational Excellence, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.
Source
Publisher: Elsevier. Country of Publication: United States. NLM ID: 101190326. Publication Model: Print-Electronic. Cited Medium: Internet. ISSN: 1558-349X (Electronic). Linking ISSN: 1546-1440. NLM ISO Abbreviation: J Am Coll Radiol. Subsets: MEDLINE.
Language
English
Abstract
Purpose: To assess ChatGPT's accuracy, relevance, and readability in answering patients' common imaging-related questions and to examine the effect of a simple prompt.
Methods: A total of 22 imaging-related questions were developed from categories previously described as important to patients: safety, the radiology report, the procedure, preparation before imaging, the meaning of terms, and medical staff. These questions were posed to ChatGPT with and without a short prompt instructing the model to provide an accurate and easy-to-understand response for the average person. Four board-certified radiologists evaluated the answers for accuracy, consistency, and relevance. Two patient advocates also reviewed the responses for their utility to patients. Readability was assessed using the Flesch-Kincaid Grade Level. Statistical comparisons were performed using χ² and paired t tests.
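The sketch below is a minimal illustration of the querying protocol described above, assuming the OpenAI Python client; the model name, prompt wording, and example question are illustrative stand-ins, not the authors' materials (the study posed its questions to ChatGPT directly).

    # Illustrative sketch only: client, model name, and prompt wording
    # are assumptions, not the study's exact setup.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical wording of the short instruction prompt described above.
    PROMPT = ("Provide an accurate response that is easy to understand "
              "for the average person.")

    def ask(question: str, prompted: bool) -> str:
        """Pose an imaging-related question, with or without the prompt."""
        messages = [{"role": "system", "content": PROMPT}] if prompted else []
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                               messages=messages)
        return reply.choices[0].message.content

    question = "Is the radiation from a CT scan dangerous?"  # example item
    unprompted_answer = ask(question, prompted=False)
    prompted_answer = ask(question, prompted=True)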
Results: A total of 264 answers were assessed for both unprompted and prompted questions. Unprompted responses were accurate 83% of the time (218 of 264), which did not change significantly for prompted responses (87% [229 of 264]; P = .2). The consistency of the responses increased from 72% (63 of 88) to 86% (76 of 88) when prompts were given (P = .02). Nearly all responses (99% [261 of 264]) were at least partially relevant for both question types. Fewer unprompted responses were considered fully relevant (67% [176 of 264]), although this increased significantly to 80% when prompts were given (210 of 264; P = .001). The average Flesch-Kincaid Grade Level was high, at 13.6 [CI, 12.9-14.2], and was unchanged by the prompt (13.0 [CI, 12.4-13.6]; P = .2). None of the responses reached the eighth-grade readability level recommended for patient-facing materials.
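As a hedged illustration of the quantitative comparisons reported above, the sketch below recomputes the consistency χ² test from the stated counts (63 of 88 unprompted vs 76 of 88 prompted) and applies the standard Flesch-Kincaid Grade Level formula to a sample sentence; the tokenization and syllable heuristic are simplifying assumptions, not the study's scoring code.

    import re
    from scipy.stats import chi2_contingency

    # Chi-squared test on the reported consistency counts.
    table = [[63, 88 - 63],   # unprompted: consistent, not consistent
             [76, 88 - 76]]   # prompted: consistent, not consistent
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p lands near the reported .02

    # Standard Flesch-Kincaid Grade Level formula:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    def count_syllables(word: str) -> int:
        """Rough syllable count: contiguous vowel groups, minimum one."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade_level(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

    sample = ("MRI uses a strong magnet and radio waves to take detailed "
              "pictures of the inside of your body.")
    grade = fk_grade_level(sample)
    print(f"FKGL: {grade:.1f} (eighth-grade target met: {grade <= 8.0})")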
Discussion: ChatGPT demonstrates the potential to answer patients' imaging-related questions accurately, consistently, and relevantly. However, its imperfect accuracy and the high complexity of its responses necessitate oversight before implementation. Prompting reduced response variability and yielded more targeted information but did not improve readability. ChatGPT has the potential to broaden access to health information and streamline the production of patient-facing educational materials; however, its current limitations require cautious implementation and further research.
(Copyright © 2023. Published by Elsevier Inc.)