Academic Paper

Reconstructing the Dynamic Directivity of Unconstrained Speech
Document Type
Conference
Source
2023 Immersive and 3D Audio: from Architecture to Automotive (I3DA), Sep. 2023, pp. 1-13
Subject
Communication, Networking and Broadcast Technologies
Fields, Waves and Electromagnetics
Signal Processing and Analysis
Training
Solid modeling
Tracking
Natural languages
Machine learning
Microphone arrays
Frequency measurement
virtual communication
speech directivity estimation
vocal presence
spatial audio
machine learning
unconstrained speech modeling
soundfield reconstruction
Language
English
Abstract
This article presents a method for estimating and reconstructing the spatial energy distribution pattern of natural speech, which is crucial for achieving realistic vocal presence in virtual communication settings. The method comprises two stages. First, recordings of speech captured by a real, static microphone array are used to create an egocentric virtual array that tracks the speaker's movement over time. This virtual array measures and encodes the high-resolution directivity pattern of the speech signal as it evolves dynamically with natural speech and movement. In the second stage, the encoded directivity representation is used to train a machine learning model that estimates the full, dynamic directivity pattern from a limited set of speech signals, such as those recorded by the microphones on a head-mounted display. Our results show that neural networks can accurately estimate the full directivity pattern of natural, unconstrained speech from this limited information. The proposed method, together with the evaluation of various machine learning models and training paradigms, is an important contribution to the development of realistic vocal presence in virtual communication settings.
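The second stage described in the abstract (mapping a limited set of microphone signals to a full directivity pattern) can be illustrated with a minimal sketch. The sketch below is not the paper's method: it substitutes a linear ridge regression for the neural network, uses synthetic data in place of real recordings, and assumes hypothetical dimensions (6 head-mounted microphones, a 64-direction grid). It only shows the shape of the estimation problem: per-frame mic energies in, per-frame directional energy pattern out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# 6 head-mounted mics, 64-direction grid, 500 time frames.
n_mics, n_dirs, n_frames = 6, 64, 500

# Synthetic "ground truth": each frame's directivity pattern is a
# combination of a few smooth basis patterns over the direction grid.
n_latent = 4
basis = rng.normal(size=(n_latent, n_dirs))
coeffs = rng.normal(size=(n_frames, n_latent))
directivity = coeffs @ basis              # (frames, directions)

# Limited observations: mic energies as an unknown linear mixture
# of the directional energies, plus a little measurement noise.
mix = rng.normal(size=(n_dirs, n_mics))
mic_energy = directivity @ mix + 0.01 * rng.normal(size=(n_frames, n_mics))

# Linear stand-in for the learned model: ridge regression from
# mic energies to the full per-direction pattern.
lam = 1e-3
A = mic_energy
W = np.linalg.solve(A.T @ A + lam * np.eye(n_mics), A.T @ directivity)

pred = A @ W                              # estimated full directivity
err = np.linalg.norm(pred - directivity) / np.linalg.norm(directivity)
print(f"relative reconstruction error: {err:.3f}")
```

Because the synthetic patterns here live in a low-dimensional subspace, a linear map recovers them almost exactly; the point of the paper's neural models is to handle the nonlinear, time-varying structure of real speech directivity, where no such linear shortcut exists.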