A Deep 2D Convolutional Network for Waveform-Based Speech Recognition

Dino Oglic, Zoran Cvetkovic, Peter Bell, Steve Renals

Due to limited computational resources, acoustic models of early automatic speech recognition (ASR) systems were built in low-dimensional feature spaces that incur considerable information loss at the outset of the process. Several comparative studies of automatic and human speech recognition suggest that this information loss can adversely affect the robustness of ASR systems. To mitigate this and allow robust models to be learned, we propose a deep 2D convolutional network that operates in the waveform domain. The first layer of the network decomposes waveforms into frequency sub-bands, thereby representing them in a structured high-dimensional space. This is achieved by means of a parametric convolutional block defined via cosine modulations of compactly supported windows. The next layer embeds the waveform in an even higher-dimensional space of high-resolution spectro-temporal patterns, implemented via a 2D convolutional block. This is followed by a gradual compression phase that selects the most relevant spectro-temporal patterns using wide-pass 2D filtering. Our results show that the approach significantly outperforms alternative waveform-based models on both noisy and spontaneous conversational speech (24% and 11% relative error reduction, respectively). Moreover, this study provides empirical evidence that learning directly from the waveform domain could be more effective than learning from hand-crafted features.
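To illustrate the idea behind the first layer, the following is a minimal NumPy sketch of a sub-band decomposition built from cosine modulations of a compactly supported window. It is an illustrative stand-in, not the paper's implementation: the fixed Hann window, the uniform center frequencies, and all sizes (`num_bands`, `filter_len`) are assumptions, whereas in the paper the convolutional block is parametric and its windows are learned.

```python
import numpy as np

def cosine_modulated_filterbank(num_bands=8, filter_len=64, sample_rate=16000):
    """Build band-pass filters as cosine modulations of a compact window.

    Hypothetical configuration: a fixed Hann window and uniformly spaced
    center frequencies up to Nyquist (the paper learns these parameters).
    """
    n = np.arange(filter_len)
    # Hann window: compactly supported, i.e. zero outside [0, filter_len).
    window = 0.5 - 0.5 * np.cos(2 * np.pi * n / (filter_len - 1))
    # Center each band in the middle of its sub-band of width fs / (2K).
    freqs = (np.arange(num_bands) + 0.5) * sample_rate / (2 * num_bands)
    return np.stack([window * np.cos(2 * np.pi * f * n / sample_rate)
                     for f in freqs])

def decompose(waveform, filters):
    """Convolve the waveform with each filter, yielding a structured
    2D (bands x time) representation of the 1D input signal."""
    return np.stack([np.convolve(waveform, h, mode="valid") for h in filters])

# Example: a 25 ms, 440 Hz tone sampled at 16 kHz.
wave = np.sin(2 * np.pi * 440 * np.arange(400) / 16000)
fb = cosine_modulated_filterbank()
subbands = decompose(wave, fb)
print(subbands.shape)  # (8, 337): 8 sub-bands, 400 - 64 + 1 time steps
```

The resulting 2D array is the kind of structured high-dimensional representation on which the subsequent 2D convolutional layers of the network would operate; here the lowest band (covering 0-1 kHz) carries most of the energy of the 440 Hz tone.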

DOI: 10.21437/Interspeech.2020-1870

Cite as: Oglic, D., Cvetkovic, Z., Bell, P., Renals, S. (2020) A Deep 2D Convolutional Network for Waveform-Based Speech Recognition. Proc. Interspeech 2020, 1654-1658, DOI: 10.21437/Interspeech.2020-1870.

@inproceedings{oglic20_interspeech,
  author={Dino Oglic and Zoran Cvetkovic and Peter Bell and Steve Renals},
  title={{A Deep 2D Convolutional Network for Waveform-Based Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1654--1658},
  doi={10.21437/Interspeech.2020-1870}
}