Journal Article

Novel Techniques to Assess Predictive Systems and Reduce Their Alarm Burden
Document Type
Periodical
Source
IEEE Journal of Biomedical and Health Informatics, 26(10):5267-5278, Oct. 2022
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Measurement
Bioinformatics
Prediction algorithms
Machine learning algorithms
Costs
Classification algorithms
Biomedical measurement
Artificial intelligence
health information management
alert fatigue
machine learning
predictive medicine
statistical performance
utility metrics
Boolean classifiers
Language
English
ISSN
2168-2194
2168-2208
Abstract
Machine prediction algorithms (e.g., binary classifiers) are often adopted on the basis of claimed performance using classic metrics such as precision and recall. However, classifier performance depends heavily upon the context (workflow) in which the classifier operates. Classic metrics do not reflect the realized performance of a predictor unless certain implicit assumptions are met, and these assumptions cannot be met in many common clinical scenarios. This often results in suboptimal implementations and in disappointment when expected outcomes are not achieved. One common failure mode for classic metrics arises when multiple predictions can be made for the same event, particularly when redundant true positive predictions produce little additional value. This describes many clinical alerting systems. We explain why classic metrics cannot correctly represent predictor performance in such contexts, and introduce an improved performance assessment technique that uses utility functions to score predictions based on their utility in a specific workflow context. The resulting utility metrics (u-metrics) explicitly account for the effects of temporal relationships and other sources of variability in prediction utility. Compared to traditional measures, u-metrics more accurately reflect the real-world costs and benefits of a predictor operating in a realized context; the improvement can be significant. We also describe a formal approach to snoozing, a mitigation strategy in which some predictions are suppressed to improve predictor performance by reducing false positives while retaining event capture. Snoozing is especially useful for predictors that generate interruptive alarms. U-metrics correctly measure and predict the performance benefits of snoozing, whereas traditional metrics do not.
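The ideas in the abstract can be made concrete with a toy sketch. The functions, utility values, and snooze rule below are illustrative assumptions, not the paper's actual u-metric definitions: redundant true positives for an already-alerted event earn zero utility, false positives carry a small cost, and a snooze window suppresses any alert fired too soon after the last delivered alert.

```python
# Illustrative sketch only (hypothetical scoring, not the paper's formulas).
# An alert is a (time, event_id) pair; event_id None marks a false positive.

def classic_precision(alerts):
    """Classic precision: every true-positive alert counts, even redundant ones."""
    tp = sum(1 for _, event in alerts if event is not None)
    return tp / len(alerts) if alerts else 0.0

def utility_score(alerts, tp_utility=1.0, fp_cost=-0.2):
    """Utility-style score: only the FIRST alert per event earns utility;
    redundant repeats add nothing, and false positives subtract a cost."""
    seen, score = set(), 0.0
    for _, event in alerts:
        if event is None:
            score += fp_cost          # false positive: interruption cost
        elif event not in seen:
            seen.add(event)
            score += tp_utility       # first capture of the event: full value
        # else: redundant true positive, zero marginal utility
    return score

def snooze(alerts, window):
    """Suppress any alert within `window` time units of the last kept alert."""
    kept, last = [], None
    for t, event in sorted(alerts, key=lambda a: a[0]):
        if last is None or t - last >= window:
            kept.append((t, event))
            last = t
    return kept

alerts = [(0, "ev1"), (1, "ev1"), (2, "ev1"), (3, None), (10, "ev2"), (11, None)]
print(classic_precision(alerts))                  # redundant TPs inflate precision
print(utility_score(alerts))
kept = snooze(alerts, window=5)
print(kept, utility_score(kept))                  # both events still captured
```

In this toy run, snoozing drops four of six alerts (including both false positives) while both events remain captured, so the utility score rises; classic precision, by contrast, rewards the three redundant "ev1" alerts in the unsnoozed stream, which is the failure mode the abstract describes.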