Academic Paper

Generative Speech Recognition Error Correction With Large Language Models and Task-Activating Prompting
Document Type
Conference
Source
2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Dec. 2023, pp. 1-8
Subject
Signal Processing and Analysis
Error analysis
Conferences
Pipelines
Acoustics
Error correction
Task analysis
Standards
large language model
N-best rescoring
instruction prompting
few-shot learning
in-context learning
Language
English
Abstract
We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel "task activation" prompting method that combines causal instructions and demonstration to increase its context window. Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning, we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.
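To make the prompting setup described in the abstract concrete, below is a minimal sketch (not taken from the paper) of how few-shot in-context prompting for N-best error correction with a frozen LLM might be assembled. The helper name query_llm, the prompt wording, and the example hypotheses are hypothetical placeholders, not the authors' implementation.

# Minimal sketch, assuming a generic frozen-LLM text-completion interface.
from typing import List, Sequence, Tuple

def build_correction_prompt(nbest: List[str],
                            demonstrations: Sequence[Tuple[List[str], str]] = ()) -> str:
    """Assemble an instruction-plus-demonstration prompt from an N-best list."""
    lines = ["Below are candidate transcriptions from a speech recognizer.",
             "Return the most likely correct transcription."]
    # Few-shot demonstrations: pairs of (N-best hypotheses, reference transcript).
    for demo_nbest, reference in demonstrations:
        lines.append("Hypotheses:")
        lines.extend(f"{i + 1}. {h}" for i, h in enumerate(demo_nbest))
        lines.append(f"Corrected: {reference}")
    # The query whose correction the LLM should generate (zero-shot if no demos).
    lines.append("Hypotheses:")
    lines.extend(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    lines.append("Corrected:")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frozen LLM (no fine-tuning)."""
    raise NotImplementedError("replace with your LLM inference call")

if __name__ == "__main__":
    demos = [(["flights from boston to denver pleas",
               "flights from austin to denver please"],
              "flights from boston to denver please")]
    nbest = ["show me the fairs to dallas", "show me the fares to dallas"]
    prompt = build_correction_prompt(nbest, demos)
    print(prompt)            # inspect the assembled prompt
    # corrected = query_llm(prompt)

In this framing, zero-shot prompting corresponds to an empty demonstration list, while few-shot in-context learning supplies a handful of (N-best, reference) pairs before the query.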