Academic Journal Article

Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments.
Document Type
Academic Journal
Author
Vatral C; Open Ended Learning Environments, Department of Computer Science, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States.; Biswas G; Open Ended Learning Environments, Department of Computer Science, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States.; Cohn C; Open Ended Learning Environments, Department of Computer Science, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States.; Davalos E; Open Ended Learning Environments, Department of Computer Science, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States.; Mohammed N; Open Ended Learning Environments, Department of Computer Science, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States.
Source
Publisher: Frontiers Media SA; Country of Publication: Switzerland; NLM ID: 101770551; Publication Model: eCollection; Cited Medium: Internet; ISSN: 2624-8212 (Electronic); Linking ISSN: 26248212; NLM ISO Abbreviation: Front Artif Intell; Subsets: PubMed not MEDLINE
Language
English
Abstract
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams in effective workplace cognitive and psychomotor skills across a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge and manual coding and interpretation, and the analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analytics techniques to SBT scenarios. These methods leverage the rich multimodal data collected in SBT environments to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) training environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative, data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these analysis methods could provide targeted feedback to learners, detailed reviews of training performance to instructors, and data-driven evidence to simulation designers for improving the environment.
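The abstract describes combining video, speech, and eye-tracking streams into quantitative summaries of trainee activity. As an illustrative sketch only, not drawn from the paper itself, the snippet below shows one common way such streams might be time-aligned and summarized; the data structures, field names, and gaze-target labels are all assumptions introduced for illustration.

```python
# Illustrative sketch (not from the paper): time-align hypothetical multimodal
# streams from a training session and summarize gaze targets per speech segment.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float      # seconds from session start
    target: str   # assumed label, e.g., "manikin", "monitor", "teammate"

@dataclass
class SpeechSegment:
    start: float  # segment start time in seconds
    end: float    # segment end time in seconds
    speaker: str  # assumed speaker identifier

def gaze_by_segment(gaze, segments):
    """For each speech segment, count gaze samples per target (assumed scheme)."""
    summary = []
    for seg in segments:
        counts = {}
        for g in gaze:
            if seg.start <= g.t < seg.end:
                counts[g.target] = counts.get(g.target, 0) + 1
        summary.append((seg.speaker, counts))
    return summary

# Tiny synthetic example of aligning the two hypothetical streams.
gaze = [GazeSample(0.1, "manikin"), GazeSample(0.6, "monitor"), GazeSample(1.2, "manikin")]
segments = [SpeechSegment(0.0, 1.0, "nurse_1"), SpeechSegment(1.0, 2.0, "nurse_2")]
print(gaze_by_segment(gaze, segments))
```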
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
(Copyright © 2022 Vatral, Biswas, Cohn, Davalos and Mohammed.)