Academic Article

Physics-Based Concatenative Sound Synthesis of Photogrammetric Models for Aural and Haptic Feedback in Virtual Environments
Document Type
Conference
Source
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 376-379, Mar. 2020
Subject
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Haptic interfaces
Cascading style sheets
Engines
Physics
Data models
Data mining
Three-dimensional displays
Sound Synthesis
Interaction
Virtual Reality
Sonic Interaction Design
Language
English
Abstract
We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across the physical, graphical, aural and haptic modalities in Virtual Environments. Navigation through aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics and graphics data are supported. From a technical perspective, the current contribution extends existing CSS frameworks by avoiding the need to map or mine annotation data into real-time performance attributes, while guaranteeing degrees of novelty and variation for the same gesture.
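
As a rough illustration of the interaction-driven unit selection the abstract describes, the Python sketch below picks an annotated audio unit from a small corpus in response to a physics event reported by a game engine. The AudioUnit fields, the select_unit function and the draw among the k nearest matches are illustrative assumptions for this sketch, not the paper's actual implementation.

import random
from dataclasses import dataclass

@dataclass
class AudioUnit:
    """One annotated unit in the corpus (descriptor names are illustrative)."""
    sample_path: str          # hypothetical path to the audio segment
    impact_energy: float      # normalised 0..1, annotated offline
    spectral_centroid: float  # Hz, annotated offline
    material: str             # e.g. "stone", "wood"

def select_unit(corpus, material, impact_energy, target_centroid, k=3):
    """Return one of the k closest units for this physics event.

    Drawing randomly among the k best matches, rather than always the
    single nearest, is one simple way to keep a repeated identical
    gesture from producing identical output.
    """
    candidates = [u for u in corpus if u.material == material]
    if not candidates:
        return None
    def distance(u):
        return (abs(u.impact_energy - impact_energy)
                + abs(u.spectral_centroid - target_centroid) / 10_000.0)
    ranked = sorted(candidates, key=distance)
    return random.choice(ranked[:k])

if __name__ == "__main__":
    corpus = [
        AudioUnit("stone_hit_01.wav", 0.2, 1800.0, "stone"),
        AudioUnit("stone_hit_02.wav", 0.7, 2400.0, "stone"),
        AudioUnit("stone_scrape_01.wav", 0.4, 3200.0, "stone"),
    ]
    # Toy collision values standing in for data from the engine's physics step.
    unit = select_unit(corpus, material="stone",
                       impact_energy=0.65, target_centroid=2300.0)
    print("selected:", unit.sample_path if unit else "none")

In a full system the descriptor values would come from the annotated corpus and the query values from the engine's collision callbacks, with the same selection step feeding both the aural and the haptic rendering paths.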