On the Robustness and Training Dynamics of Raw Waveform Models

Erfan Loweimi, Peter Bell, Steve Renals


We investigate the robustness and training dynamics of raw waveform acoustic models for automatic speech recognition (ASR). It is known that the first layer of such models learns a set of filters, performing a form of time-frequency analysis. This layer is liable to be under-trained owing to vanishing gradients, which can negatively affect network performance. Through a set of experiments on the TIMIT, Aurora-4 and WSJ datasets, we investigate the training dynamics of the first layer by measuring the evolution of its average frequency response over training epochs. We demonstrate that the network efficiently learns an optimal set of filters with high spectral resolution, and that the dynamics of the first layer correlate strongly with the dynamics of the cross-entropy (CE) loss and word error rate (WER). In addition, we study the robustness of raw waveform models in both matched and mismatched conditions. The accuracy of these models is found to be comparable to, or better than, that of their MFCC-based counterparts in matched conditions, and is notably improved by using a better alignment. We also examine the role of raw waveform normalisation, achieving up to 4.3% absolute WER reduction in mismatched conditions.
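The paper's diagnostic of the first layer rests on its average frequency response. As a minimal sketch, assuming the first-layer convolutional weights are available as a NumPy array of shape (num_filters, filter_length) — the shapes and function name here are illustrative, not taken from the paper's code — the quantity can be computed per epoch as follows:

```python
import numpy as np

def average_frequency_response(filters, n_fft=512):
    """Average magnitude frequency response of a bank of 1-D filters.

    filters: array of shape (num_filters, filter_length), standing in for
    the first-layer conv weights of a raw waveform model (hypothetical shape).
    Returns a single spectrum of length n_fft // 2 + 1, averaged over filters.
    """
    # Zero-padded real FFT of each filter's impulse response, then magnitude.
    spectra = np.abs(np.fft.rfft(filters, n=n_fft, axis=1))
    # Average over the filter bank to get one summary response per epoch.
    return spectra.mean(axis=0)

# Toy usage: random weights stand in for learned filters at some epoch.
rng = np.random.default_rng(0)
filters = rng.standard_normal((40, 129))
resp = average_frequency_response(filters)
print(resp.shape)  # (257,)
```

Tracking this curve after each epoch, as the paper does, shows how quickly the layer's spectral selectivity sharpens relative to the CE loss and WER trajectories.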


 DOI: 10.21437/Interspeech.2020-0017

Cite as: Loweimi, E., Bell, P., Renals, S. (2020) On the Robustness and Training Dynamics of Raw Waveform Models. Proc. Interspeech 2020, 1001-1005, DOI: 10.21437/Interspeech.2020-0017.


@inproceedings{Loweimi2020,
  author={Erfan Loweimi and Peter Bell and Steve Renals},
  title={{On the Robustness and Training Dynamics of Raw Waveform Models}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1001--1005},
  doi={10.21437/Interspeech.2020-0017},
  url={http://dx.doi.org/10.21437/Interspeech.2020-0017}
}