Academic Article

Human As Automation Failsafe: Concept, Implications, Guidelines and Innovations
Document Type
Article
Source
Proceedings of the Human Factors and Ergonomics Society Annual Meeting; September 2022, Vol. 66 Issue: 1 p100-104, 5p
ISSN
10711813; 21695067
Abstract
Humans are frequently left to “backstop” automated systems, and Human Factors specialists have argued against this practice for decades with, at best, partial success. What if we took a different tack and designed to support it? The panelists were involved in a recent effort to review and document cases across multiple domains in which operators acted as a “failsafe” for automation, intervening in unanticipated situations to maximize success and minimize damage. We defined a “Human As Failsafe” (HAF) incident and then investigated the conditions and practices that make HAF success more or less likely. Analyzing these historical incidents, we suggested remediation approaches. The project also examined the legal concept of culpability (i.e., when intervention should have happened but did not) and proposed a state-machine-based analytic simulation to identify when HAF interventions are plausible. The panel’s objective will be to briefly present these concepts, but more generally to discuss designing for inevitable HAF events.