Academic Paper

Using relative head and hand-target features to predict intention in 3D moving-target selection
Document Type
Conference
Source
2014 IEEE Virtual Reality (VR), pp. 51-56, Mar. 2014
Subject
Computing and Processing
Accuracy
Predictive models
Decision trees
Three-dimensional displays
Solid modeling
Virtual reality
Human computer interaction
H.5.2 [Information interfaces and presentation]: User Interfaces — Interaction Styles, Theory and methods
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism — Virtual Reality
I.5.4 [Pattern Recognition]: Applications
Language
English
ISSN
1087-8270
2375-5334
Abstract
Selection of moving targets is a common yet complex task in human-computer interaction (HCI) and virtual reality (VR). Predicting user intention may help address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict the intended moving target. The features are computed over a time window ending at roughly two-thirds of the total target-selection time and evaluated using decision trees. With two targets, this model predicts user choice with up to ∼72% accuracy on general moving-target selection tasks, and up to ∼78% when task-related target properties are also included.
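To make the abstract's idea concrete, the following is a minimal, illustrative sketch of the kind of relative features the paper describes: a hand-target closing distance over a time window and a head-target angular offset. The feature definitions, function names, and the hand-written two-feature decision rule are all assumptions for illustration; the actual paper trains decision trees on its own feature set rather than using fixed thresholds like these.

```python
import math

def hand_target_features(hand_positions, target_positions):
    """Closing distance: how much the hand-target distance shrinks
    over the time window (positive = hand approaching the target)."""
    d_start = math.dist(hand_positions[0], target_positions[0])
    d_end = math.dist(hand_positions[-1], target_positions[-1])
    return d_start - d_end

def head_target_angle(head_dir, head_pos, target_pos):
    """Angle (radians) between the head's facing direction and the
    head-to-target vector; smaller = head oriented toward the target."""
    to_target = [t - h for t, h in zip(target_pos, head_pos)]
    dot = sum(a * b for a, b in zip(head_dir, to_target))
    norm = math.hypot(*head_dir) * math.hypot(*to_target)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def predict_intended(features_a, features_b):
    """Toy stand-in for a learned decision tree over two candidate
    targets: prefer the target the hand is closing on faster, and
    break near-ties with the smaller head-target angle.
    Returns 0 for target A, 1 for target B."""
    closing_a, angle_a = features_a
    closing_b, angle_b = features_b
    if abs(closing_a - closing_b) > 1e-3:
        return 0 if closing_a > closing_b else 1
    return 0 if angle_a <= angle_b else 1
```

In practice the thresholds and feature splits would be learned from recorded selection trials (e.g. with a standard decision-tree learner), using samples only up to roughly two-thirds of the selection time, as the abstract notes.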