Academic Article

Explainable artificial intelligence in emergency medicine: an overview
Document Type
article
Source
Clinical and Experimental Emergency Medicine, Vol 10, Iss 4, Pp 354-362 (2023)
Subject
artificial intelligence
machine learning
resuscitation
emergency medicine
Medical emergencies. Critical care. Intensive care. First aid
RC86-88.9
Language
English
ISSN
2383-4625
Abstract
Artificial intelligence (AI) and machine learning (ML) have the potential to revolutionize emergency medical care by enhancing triage systems, improving diagnostic accuracy, refining prognostication, and optimizing various aspects of clinical care. However, because clinicians often lack AI expertise, they may perceive AI as a “black box,” leading to trust issues. To address this, “explainable AI,” which explains the functioning of AI models to end users, is important. This review presents the definitions, importance, and role of explainable AI, as well as potential challenges, in emergency medicine. First, we introduce the terms explainability, interpretability, and transparency of AI models; these terms sound similar but play different roles in discussions of AI. Second, we indicate that explainable AI is required in clinical settings for reasons of justification, control, improvement, and discovery, and we provide examples. Third, we describe three major categories of explainability: pre-modeling explainability, interpretable models, and post-modeling explainability, and we present examples (especially of post-modeling explainability) such as visualization, simplification, text justification, and feature relevance. Last, we discuss the challenges of implementing AI and ML models in clinical settings and highlight the importance of collaboration among clinicians, developers, and researchers. This review summarizes the concept of “explainable AI” for emergency medicine clinicians and may help them understand explainable AI in emergency contexts.