Academic paper

A Registration Framework for the Comparison of Video and Optical See-Through Devices in Interactive Augmented Reality
Document Type
Periodical
Source
IEEE Access, 9:64828-64843, 2021
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Task analysis
Cameras
Calibration
Optical distortion
Augmented reality
Adaptive optics
Resists
Augmented reality device calibration
depth perception in AR
egocentric perception
interactive AR
reference frame alignment
user experience in AR
Language
ISSN
2169-3536
Abstract
In this paper, we design a registration framework that can be used to develop augmented reality (AR) environments in which all real elements (including the users) and virtual elements are co-localized and registered in a common reference frame. The software is provided together with this paper as a contribution to the research community. The framework allows us to perform a quantitative assessment of interaction and egocentric perception in AR environments. We assess perception and interaction in the peripersonal space through a 3D blind reaching task in a simple scenario and an interaction task in a kitchen scenario, using both video see-through (VST) and optical see-through (OST) head-worn technologies. Moreover, we carry out the same 3D blind reaching task in real conditions (without a head-mounted display and reaching real targets), which provides a baseline performance against which to compare the two AR technologies. Compared with the real-world baseline, the blind reaching results show an underestimation of distances with OST devices and, with both OST and VST devices, smaller estimation errors for frontal spatial positions when depth does not change. Such errors are compensated in the kitchen interaction task: thanks to the egocentric viewing geometry and the specific task, which constrain the perceived positions to a table, both VST and OST devices achieve comparable and effective performance. Thus, our results show that these technologies have perceptual issues, yet they can be used effectively in specific real tasks. This does not allow us to choose between VST and OST devices, but it provides a baseline and a registration framework for further studies and emphasizes the specificity of perception in interactive AR.
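The abstract's central idea is registering all real and virtual elements in a common reference frame. A standard building block for this kind of alignment is estimating the rigid transform (rotation and translation) between two sets of corresponding 3D points, e.g., tracker fiducials measured in two coordinate systems. The sketch below is illustrative only and is not taken from the paper's framework; it implements the well-known Kabsch (orthogonal Procrustes) solution with NumPy, and the function name `align_frames` is a hypothetical choice.

```python
import numpy as np

def align_frames(src, dst):
    """Estimate the rigid transform (R, t) such that dst ≈ R @ src + t,
    given N corresponding 3D points in each frame (Kabsch algorithm)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point sets on their centroids.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance matrix of the centered correspondences.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With at least three non-collinear correspondences this gives the least-squares rigid alignment, which could then map, for instance, head-tracker coordinates into the common scene frame.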