Academic Paper

Virtual Personal Assistant Design Effects on Memory Encoding
Document Type
Conference
Source
2022 Systems and Information Engineering Design Symposium (SIEDS), Apr. 2022, pp. 173-177
Subject
Computing and Processing
Engineering Profession
General Topics for Engineers
Robotics and Control Systems
Transportation
Visualization
Cognitive processes
Virtual assistants
Lips
Mouth
Data collection
Market research
virtual personal assistant (VPA)
lip reading
audio-visual
active attention
cross-modal correspondence
Language
English
Abstract
Virtual personal assistants (VPAs) such as Siri and Alexa have become common objects in households. Users frequently rely on these systems to search the internet or retrieve information. As such, it is important to know how using these products affects cognitive processes like memory. Previous research suggests that visual speech perception influences auditory perception in human-human interactions. However, many VPAs are designed as a box or sphere that does not interact with the user visually. This lack of visual speech perception when interacting with a VPA could affect how humans interact with the system and how well they retain information, such as how many ounces are in a cup or how to greet someone in another language. This raises the question of whether the design of these VPAs limits users' ability to retain the information they get from these systems. To test this, we designed an experiment that explores interactions between user memory and either a traditional audio-only presentation (as found with Siri or Alexa, for example) or one that allows for visual speech perception. Participants were asked to listen to an audio clip of a nonsensical story. In one condition, participants listened while looking at a blank screen (analogous to the lack of visual feedback inherent in current VPA designs). After a block of 25 audio clips, participants took a test on the information they had heard. This process was then repeated with an animated face with synchronized mouth movements in place of the blank screen. Other participants will experience the same two presentations in reverse order, to counterbalance condition presentation. Data collection is currently underway. We predicted that a VPA paired with synchronized lip movement would promote visual speech perception and thus help participants retain information. While we are still collecting data, the current trend does not show a significant difference between the audio-only and lip-movement conditions. This could indicate differing lipreading abilities among participants.
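Because each participant experiences both the audio-only and lip-movement conditions, the comparison described in the abstract is a within-subjects design. The sketch below shows one common way such data could be compared, using a paired t-test on per-participant memory-test scores; the score values, variable names, and choice of test are illustrative assumptions, not the authors' reported analysis.

```python
# Minimal sketch: paired comparison of memory-test scores across the two
# presentation conditions (audio-only vs. synchronized lip movement).
# The scores below are made-up placeholders, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical per-participant scores (proportion correct) in each condition,
# collected in counterbalanced order.
audio_only   = np.array([0.72, 0.64, 0.80, 0.68, 0.76, 0.60, 0.70, 0.74])
lip_movement = np.array([0.74, 0.66, 0.78, 0.72, 0.75, 0.63, 0.71, 0.77])

# Within-subjects (paired) t-test: each participant contributes a score
# to both conditions, so the test is run on the paired differences.
t_stat, p_value = stats.ttest_rel(lip_movement, audio_only)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```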