
e-Article

Robust Dual-Modal Speech Keyword Spotting for XR Headsets
Document Type
Periodical
Author
Source
IEEE Transactions on Visualization and Computer Graphics, 30(5):2507-2516, May 2024
Subject
Computing and Processing
Bioengineering
Signal Processing and Analysis
Speech recognition
Headphones
Hidden Markov models
Noise measurement
Sensors
X reality
Mouth
Speech interaction
extended reality
keyword spotting
multimodal interaction
Language
English
ISSN
1077-2626
1941-0506
2160-9306
Abstract
While speech interaction finds widespread utility within the Extended Reality (XR) domain, conventional vocal speech keyword spotting systems continue to grapple with formidable challenges, including suboptimal performance in noisy environments, impracticality in situations requiring silence, and susceptibility to inadvertent activations when others speak nearby. These challenges, however, can potentially be surmounted through the cost-effective fusion of voice and lip movement information. Consequently, we propose a novel vocal-echoic dual-modal keyword spotting system designed for XR headsets. We devise two different modal fusion approaches and conduct experiments to test the system's performance across diverse scenarios. The results show that our dual-modal system not only consistently outperforms its single-modal counterparts, demonstrating higher precision in both typical and noisy environments, but also excels in accurately identifying silent utterances. Furthermore, we have successfully applied the system in real-time demonstrations, achieving promising results. The code is available at https://github.com/caizhuojiang/VE-KWS.
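The abstract describes fusing a vocal (audio) stream with an echoic (lip-movement) stream for keyword spotting but does not detail the fusion approaches. The sketch below is only an illustrative, hedged example of one common fusion strategy (late fusion by concatenating per-modality embeddings); the encoder architectures, feature dimensions, class count, and the `DualModalKWS` name are assumptions for illustration and are not taken from the paper or the VE-KWS repository.

```python
# Illustrative sketch only: a minimal late-fusion keyword-spotting classifier.
# All architectural choices here are assumptions, not the authors' method.
import torch
import torch.nn as nn


class DualModalKWS(nn.Module):
    """Fuses a vocal (audio) embedding and an echoic (lip-movement) embedding,
    then classifies the utterance into one of `num_keywords` keywords."""

    def __init__(self, audio_dim=64, echo_dim=64, hidden=128, num_keywords=10):
        super().__init__()
        # Each modality gets its own lightweight sequence encoder (assumed).
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        self.echo_enc = nn.GRU(echo_dim, hidden, batch_first=True)
        # Late fusion: concatenate the two utterance-level embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_keywords),
        )

    def forward(self, audio_feats, echo_feats):
        # audio_feats: (batch, time, audio_dim), e.g. log-mel frames
        # echo_feats:  (batch, time, echo_dim), e.g. echo/lip-motion features
        _, h_a = self.audio_enc(audio_feats)   # final hidden state per modality
        _, h_e = self.echo_enc(echo_feats)
        fused = torch.cat([h_a[-1], h_e[-1]], dim=-1)
        return self.classifier(fused)          # keyword logits


# Usage with random tensors standing in for real features.
model = DualModalKWS()
audio = torch.randn(2, 100, 64)   # two utterances, 100 frames of audio features
echo = torch.randn(2, 100, 64)    # matching echoic/lip-movement features
logits = model(audio, echo)
print(logits.shape)               # torch.Size([2, 10])
```

An alternative, also consistent with the abstract's mention of two fusion approaches, would be early fusion (concatenating frame-level features before a shared encoder); the paper's actual designs should be taken from the linked repository.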