Academic Paper

MUST: A Multilingual Student-Teacher Learning Approach for Low-Resource Speech Recognition
Document Type
Conference
Source
2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 1-6, Dec. 2023
Subject
Signal Processing and Analysis
Training
Error analysis
Conferences
Automatic speech recognition
multilingual
knowledge distillation
low-resource languages
Language
English
Abstract
Student-teacher learning, or knowledge distillation (KD), has previously been used to address the data scarcity issue in training automatic speech recognition (ASR) systems. However, a limitation of KD training is that the student model's classes must be a proper or improper subset of the teacher model's classes, which prevents distillation even from acoustically similar languages if the character sets are not the same. In this work, the aforementioned limitation is addressed by proposing MUltilingual Student-Teacher (MUST) learning, which exploits a posteriors mapping approach. A pre-trained mapping model is used to map posteriors from a teacher language onto the student-language ASR character set, and these mapped posteriors are used as soft labels for KD learning. Various teacher ensemble schemes are evaluated for training an ASR model for low-resource languages. A model trained with MUST learning reduces character error rate (CER) by up to 9.5% relative in comparison with a baseline monolingual ASR.
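To make the distillation mechanism described in the abstract concrete, below is a minimal sketch of how mapped teacher posteriors could serve as soft labels for KD. The PosteriorMapper class, the vocabulary sizes, the linear projection, and the mixing weight alpha are all illustrative assumptions; the paper's actual mapping-model architecture and loss formulation are not specified in this record.

```python
# Minimal sketch of the MUST idea: a pre-trained mapping model converts
# per-frame teacher posteriors (over the teacher language's character set)
# into posteriors over the student language's character set, which then
# serve as soft labels for knowledge distillation. Names, dimensions, and
# the linear mapper are assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

TEACHER_VOCAB = 60   # assumed size of the teacher-language character set
STUDENT_VOCAB = 45   # assumed size of the student-language character set

class PosteriorMapper(nn.Module):
    """Assumed mapping model: projects per-frame teacher posteriors
    onto the student character set (the paper pre-trains such a model)."""
    def __init__(self, t_vocab: int, s_vocab: int):
        super().__init__()
        self.proj = nn.Linear(t_vocab, s_vocab)

    def forward(self, teacher_post: torch.Tensor) -> torch.Tensor:
        # teacher_post: (batch, frames, t_vocab), each row a distribution
        return F.softmax(self.proj(teacher_post), dim=-1)

def must_kd_loss(student_logits, teacher_post, mapper, hard_targets, alpha=0.5):
    """Mix the KD loss (against mapped soft labels) with the usual
    hard-label cross-entropy; alpha is an assumed interpolation weight."""
    with torch.no_grad():
        soft_labels = mapper(teacher_post)             # (B, T, s_vocab)
    log_probs = F.log_softmax(student_logits, dim=-1)  # student posteriors
    kd = F.kl_div(log_probs, soft_labels, reduction="batchmean")
    ce = F.cross_entropy(student_logits.transpose(1, 2), hard_targets)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random tensors standing in for real ASR frames.
B, T = 2, 10
mapper = PosteriorMapper(TEACHER_VOCAB, STUDENT_VOCAB)
teacher_post = F.softmax(torch.randn(B, T, TEACHER_VOCAB), dim=-1)
student_logits = torch.randn(B, T, STUDENT_VOCAB, requires_grad=True)
hard_targets = torch.randint(0, STUDENT_VOCAB, (B, T))
loss = must_kd_loss(student_logits, teacher_post, mapper, hard_targets)
loss.backward()
```

Because the mapper runs under no_grad, only the student receives gradients here; in practice the mapping model would be pre-trained separately, as the abstract states, before being frozen for distillation.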