학술논문

Title
AIの合理性と人間–AI系の合理性を目指す信頼較正 / Trust calibration as rationality for human–AI cooperative decision making
Document Type
Journal Article
Source
認知科学 / Cognitive Studies: Bulletin of the Japanese Cognitive Science Society. 2022, 29(3):364
Subject
AI
human–AI cooperative decision making
rationality
trust calibration
artificial intelligence
Language
Japanese
ISSN
1341-7924 (Print)
1881-5995 (Online)
Abstract
In this paper, we discuss AI's rationality and explain the rationality of a human–AI system through our adaptive trust calibration. First, we describe AI's rationality by introducing the formalization of reinforcement learning. We then explain our adaptive trust calibration method, which has been developed for rational human–AI cooperative decision making. The safety and efficiency of human–AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose an adaptive trust calibration method consisting of a framework that detects an improper calibration status by monitoring the user's reliance behavior, and cognitive cues, called "trust calibration cues," that prompt the user to re-execute trust calibration.
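
For the reinforcement-learning formalization mentioned in the abstract, the textbook statement of an agent's rationality is maximization of expected discounted return in a Markov decision process (S, A, P, r, γ). The following is that standard formulation, not necessarily the paper's exact notation:

\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right], \qquad 0 \le \gamma < 1

Here \pi is a policy mapping states to actions, r(s_t, a_t) is the reward at step t, and the rational (optimal) policy \pi^{*} is the one that maximizes the expected discounted sum of rewards.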
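The abstract describes the proposed method as (i) detecting an improper calibration status by monitoring the user's reliance behavior and (ii) presenting "trust calibration cues" that prompt recalibration. Below is a minimal Python sketch of that monitor–detect–cue loop under strong simplifying assumptions: the class name, the fixed reliance-rate thresholds, and the windowed over-/under-trust detector are illustrative inventions, not the paper's actual formulation.

from collections import deque
from typing import Optional


class AdaptiveTrustCalibration:
    """Illustrative monitor-detect-cue loop for trust calibration."""

    def __init__(self, window: int = 20, low: float = 0.3, high: float = 0.9):
        # Recent reliance decisions: 1 = user accepted the AI's recommendation.
        self.history = deque(maxlen=window)
        self.low, self.high = low, high  # assumed under-/over-trust bounds

    def observe(self, relied_on_ai: bool) -> None:
        # Monitor the user's reliance behavior, one decision at a time.
        self.history.append(1 if relied_on_ai else 0)

    def calibration_status(self) -> str:
        # Withhold judgment until the observation window is full.
        if len(self.history) < self.history.maxlen:
            return "unknown"
        rate = sum(self.history) / len(self.history)
        if rate > self.high:
            return "over-trust"   # user relies almost unconditionally
        if rate < self.low:
            return "under-trust"  # user almost never relies on the AI
        return "calibrated"

    def maybe_present_cue(self) -> Optional[str]:
        # A trust calibration cue prompts the user to reassess their reliance.
        status = self.calibration_status()
        if status in ("over-trust", "under-trust"):
            return f"CUE: possible {status}; please reassess the AI's recent performance."
        return None

A real detector would presumably weigh the user's reliance against the AI's observed reliability rather than against fixed rates; the sketch only illustrates the monitor-detect-cue structure the abstract describes.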