Academic Paper

SLM: Bridge the Thin Gap Between Speech and Text Foundation Models
Document Type
Conference
Source
2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 1-8, Dec. 2023
Subject
Signal Processing and Analysis
Bridges
Adaptation models
Conferences
Real-time systems
Question answering (information retrieval)
Encoding
Task analysis
Language
Abstract
We present a joint Speech and Language Model (SLM), a multitask, multilingual, and dual-modal model that takes advantage of pretrained foundational speech and language models. SLM freezes the pretrained foundation models to maximally preserve their capabilities, and only trains a simple adapter with just 1% (156M) of the foundation models' parameters. This adaptation not only leads SLM to achieve strong performance on conventional tasks such as automatic speech recognition (ASR) and automatic speech translation (AST), but also unlocks the novel capability of zero-shot instruction-following for more diverse tasks. Given a speech input and a text instruction, SLM is able to perform unseen generation tasks including contextual biasing ASR using real-time context, dialog generation, speech continuation, and question answering. Our approach demonstrates that the representational gap between pretrained speech and language models is narrower than one would expect, and can be bridged by a simple adaptation mechanism. As a result, SLM is not only efficient to train, but also inherits strong capabilities already present in foundation models of different modalities.
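The abstract's core recipe, two frozen foundation models joined by a small trainable adapter, can be sketched as follows. Everything below (the module names, the two-layer projection, the dimensions, and the `build_slm_like_model` helper) is an illustrative assumption, not the paper's actual architecture; the abstract only states that the adapter is simple and holds roughly 1% (156M) of the parameters.

```python
import torch
import torch.nn as nn

# Minimal sketch of the frozen-backbone + trainable-adapter pattern the
# abstract describes. Module shapes, names, and the adapter design are
# illustrative assumptions, not the paper's exact architecture.

class SpeechTextAdapter(nn.Module):
    """Small trainable bridge mapping frozen speech-encoder states into
    the frozen language model's embedding space."""

    def __init__(self, speech_dim: int, text_dim: int, hidden_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(speech_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, text_dim),
        )

    def forward(self, speech_states: torch.Tensor) -> torch.Tensor:
        # (batch, frames, speech_dim) -> (batch, frames, text_dim)
        return self.proj(speech_states)


def build_slm_like_model(speech_encoder: nn.Module,
                         language_model: nn.Module,
                         speech_dim: int, text_dim: int) -> nn.Module:
    # Freeze both foundation models so their pretrained capabilities are
    # preserved; only the adapter receives gradient updates.
    for module in (speech_encoder, language_model):
        for p in module.parameters():
            p.requires_grad = False

    adapter = SpeechTextAdapter(speech_dim, text_dim)
    trainable = sum(p.numel() for p in adapter.parameters())
    total = trainable + sum(p.numel()
                            for m in (speech_encoder, language_model)
                            for p in m.parameters())
    # The paper reports the adapter at about 1% of all parameters.
    print(f"trainable fraction: {trainable / total:.2%}")
    return adapter
```

In a setup like this, inference would embed the text instruction with the frozen LM's own token embeddings, concatenate those embeddings with the adapted speech representations, and decode with the frozen LM, which is consistent with the abstract's "speech input plus text instruction" interface; the exact wiring is not specified in the record above.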