Explicit distortion modelling using DDSP to simulate air traffic control (ATC) noisy speech
The performance of automatic speech recognition (ASR) systems degrades drastically under noisy conditions. Explicit distortion modelling (EDM), as a feature compensation step, can enhance ASR systems under such conditions by simulating in-domain noisy speech from its clean counterpart. However, existing distortion models are either non-trainable or unexplainable, and they often lack controllability and generalization ability. In this paper, we propose a fully explainable and controllable model, DENT-DDSP, to achieve EDM. DENT-DDSP utilizes trainable differentiable digital signal processing (DDSP) components and requires only 10 seconds of training data to achieve high fidelity. Experiments show that the noisy data simulated by DENT-DDSP achieves the best simulation quality among static and GAN-based distortion models in terms of multi-scale spectral loss (MSSL). Furthermore, a downstream ASR task is designed to evaluate whether the simulated noisy data can be used for ASR training and achieve performance similar to that obtained with real noisy data. The ASR model trained on data simulated by DENT-DDSP achieves the lowest word error rate (WER) among all distortion models, and its WER is comparable to the upper bound set by a model trained on in-domain real noisy data.
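Simulation quality is reported in terms of multi-scale spectral loss (MSSL), which compares magnitude spectrograms of the real and simulated noisy speech at several FFT resolutions. Below is a minimal NumPy sketch of this metric, following the common DDSP-style formulation (sum of L1 distances between linear and log magnitude spectrograms); the function names and the exact set of FFT sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def stft_mag(x, fft_size, hop):
    """Magnitude STFT via Hann-windowed frames (truncates the tail)."""
    window = np.hanning(fft_size)
    n_frames = 1 + (len(x) - fft_size) // hop
    frames = np.stack([x[i * hop:i * hop + fft_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def multi_scale_spectral_loss(x, y, fft_sizes=(2048, 1024, 512, 256, 128, 64),
                              eps=1e-7):
    """Sum over FFT scales of L1 distances between linear- and
    log-magnitude spectrograms of signals x and y."""
    loss = 0.0
    for n in fft_sizes:
        hop = n // 4
        sx, sy = stft_mag(x, n, hop), stft_mag(y, n, hop)
        loss += np.mean(np.abs(sx - sy))
        loss += np.mean(np.abs(np.log(sx + eps) - np.log(sy + eps)))
    return loss
```

A lower MSSL between simulated and real noisy speech indicates that the distortion model reproduces the target channel and noise characteristics more faithfully.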
Speech Samples
Comparison between clean speech, real noisy speech, and simulated noisy speech from different distortion models: DENT-DDSP, SimuGAN, G726, and Codec2. Note that for the G726 and Codec2 outputs, the background noise is collected from real noisy speech data, whereas the background noise of DENT-DDSP and SimuGAN is simulated.
ID | Clean speech | Real noisy speech | DENT-DDSP simulated | SimuGAN simulated | G726 simulated | Codec2 simulated |
---|---|---|---|---|---|---|
1 | ||||||
2 | ||||||
3 | ||||||
4 ||||||
Generalization ability
The same set of non-speech data is used to test the generalization ability of the two trainable distortion models: DENT-DDSP and SimuGAN. The non-speech data includes: 1. ambient noise, 2. guitar, 3. piano, 4. synth audio 1, 5. synth audio 2. Simulation quality is reflected in 1. the noise spectral characteristics and 2. the audio spectral characteristics (distortion, with high frequencies being dampened).
ID | Clean audio (non-speech) | DENT-Simulated | SimuGAN-Simulated |
---|---|---|---|
ambient noise | |||
guitar | |||
piano | |||
synth audio1 | |||
synth audio2 |||