Academic Article

Visualising and Explaining Deep Learning Models for Speech Quality Prediction
Document Type
Working Paper
Source
arXiv
Subject
Computer Science - Sound
Computer Science - Machine Learning
Electrical Engineering and Systems Science - Audio and Speech Processing
I.2.7
Language
English
Abstract
Estimating the quality of transmitted speech is known to be a non-trivial task. While traditionally test participants are asked to rate the quality of samples, nowadays automated methods are available. These methods can be divided into: 1) intrusive models, which use both the original and the degraded signal, and 2) non-intrusive models, which only require the degraded signal. Recently, non-intrusive models based on neural networks have been shown to outperform signal-processing-based models. However, the advantages of deep-learning-based models come at the cost of being more challenging to interpret. To gain more insight into such prediction models, the non-intrusive speech quality prediction model NISQA is analyzed in this paper. NISQA is composed of a convolutional neural network (CNN) and a recurrent neural network (RNN). The task of the CNN is to compute relevant features for the speech quality prediction on a frame level, while the RNN models time dependencies between the individual speech frames. Different explanation algorithms are used to understand the automatically learned features of the CNN. In this way, several interpretable features could be identified, such as sensitivity to noise or to strong interruptions. On the other hand, it was found that multiple features carry redundant information.
Comment: 4 pages, 6 figures, In Proceedings of the DAGA 2021 (the annual conference of the German Acoustical Society, DEGA)
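The abstract describes a two-stage architecture: a CNN that computes per-frame features from the speech signal, followed by an RNN that models time dependencies between frames. Below is a minimal PyTorch sketch of such a CNN+RNN quality predictor; the mel-spectrogram patch input, all layer sizes, and the mean-pooling over time are illustrative assumptions, not the published NISQA configuration.

```python
import torch
import torch.nn as nn

class CnnRnnQualityModel(nn.Module):
    """Sketch: CNN computes per-frame features; RNN models time dependencies."""
    def __init__(self, n_mels=48, patch_width=15, cnn_dim=64, rnn_dim=128):
        super().__init__()
        # CNN applied independently to each spectrogram patch (one per frame)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, cnn_dim),
        )
        # RNN links the per-frame CNN features across time
        self.rnn = nn.LSTM(cnn_dim, rnn_dim, batch_first=True)
        self.head = nn.Linear(rnn_dim, 1)  # single overall quality (MOS) estimate

    def forward(self, x):
        # x: (batch, frames, n_mels, patch_width) mel-spectrogram patches
        b, t, m, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, m, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)                        # (batch, frames, rnn_dim)
        return self.head(out.mean(dim=1)).squeeze(-1)   # pool over time

model = CnnRnnQualityModel()
mos = model(torch.randn(2, 100, 48, 15))  # two utterances, 100 frames each
print(mos.shape)  # torch.Size([2])
```

The abstract also states that different explanation algorithms were applied to the learned CNN features, without naming them here. As a stand-in, the snippet below uses plain input-gradient saliency, one of the simplest such algorithms, to show how per-frame relevance could be read off the model sketched above.

```python
# Plain input-gradient saliency: which time-frequency regions of the input
# most influence the predicted quality score.
x = torch.randn(1, 100, 48, 15, requires_grad=True)
score = model(x)
score.backward()
saliency = x.grad.abs()                      # (1, frames, n_mels, patch_width)
frame_relevance = saliency.sum(dim=(2, 3))   # relevance per speech frame
print(frame_relevance.shape)                 # torch.Size([1, 100])
```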