학술논문

Trust in human-robot collaboration: an exploration of the dynamics of trust violation and repair
Document Type
Electronic Resource
Master Thesis
Student Thesis
Abstract
Human-robot interactions are ever increasing, and tasks requiring collaboration between humans and robots are becoming prevalent in a growing number of fields, such as education, healthcare, the workplace, and our own homes. Robots are becoming part of a social context, with new expectations tied to their newfound social roles. Research has shown that trust is one of the core factors contributing to efficient collaboration; fostering trust within the human-robot relationship is therefore crucial. However, just like humans, robots are bound to make mistakes and fail, and these failures can lead to a violation of trust. Research has accordingly focused on how to repair this broken trust, but results have been mixed. In this work we investigate the effects of five communicative trust repair strategies (apology, denial, explanation, compensation, silence) on participants' trust in the robot, following trust violations of two different kinds (integrity-based, competence-based). Additionally, the effect of the trust violations and repair attempts on the perceived humanlikeness of the robot is measured. This is done by conducting an online between-subjects experiment, wherein participants engage in a collaborative task with the robot, during which the robot repeatedly commits trust-violating acts and responds with a repair message. The findings indicate that integrity violations have a more severe effect on moral trust and on willingness to collaborate in the future. Surprisingly, no differential effect between the two violation types on performance trust is found. Moreover, the results suggest that compensation leads to higher willingness to collaborate and higher trust levels, whilst also resulting in lower discomfort ratings. This work contributes to the ongoing effort of understanding trust relationships in collaborative HRI contexts.