Exemplar-based Speech Waveform Generation

Oliver Watts, Cassia Valentini-Botinhao, Felipe Espic, Simon King

This paper presents a simple but effective method for generating speech waveforms by selecting small units of stored speech to match a low-dimensional target representation. The method is designed as a drop-in replacement for the vocoder in a deep neural network-based text-to-speech system. Most previous work on hybrid unit selection waveform generation relies on phonetic annotation for determining unit boundaries, for specifying target cost, or for candidate preselection. In contrast, our waveform generator requires no phonetic information, annotation, or alignment. Unit boundaries are determined by epochs, and spectral analysis provides representations that are compared directly with target features at runtime. As in unit selection, we minimise a combination of target cost and join cost, but find that greedy left-to-right nearest-neighbour search gives similar results to dynamic programming. The method is fast and can generate the waveform incrementally. We use publicly available data and provide a permissively-licensed open source toolkit for reproducing our results.
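The greedy left-to-right search described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' toolkit: the function name, feature shapes, and Euclidean distances are assumptions, and real systems would use the paper's epoch-sized units and spectral features. Each output slot picks the stored unit minimising a weighted sum of target cost (distance to the target feature vector) and join cost (mismatch between the previous unit's end and the candidate's start):

```python
import numpy as np

def greedy_select(targets, unit_feats, unit_starts, unit_ends, join_weight=1.0):
    """Greedy left-to-right nearest-neighbour unit selection (illustrative).

    targets:     (T, D) target feature vectors, one per output unit slot
    unit_feats:  (N, D) stored units' features in target-cost space
    unit_starts: (N, J) features at each stored unit's start (join-cost space)
    unit_ends:   (N, J) features at each stored unit's end
    Returns a list of T selected unit indices.
    """
    selected = []
    prev_end = None
    for t in targets:
        # Target cost: Euclidean distance from each candidate to the target.
        cost = np.linalg.norm(unit_feats - t, axis=1)
        # Join cost: mismatch between the previous unit's end and each
        # candidate's start; skipped for the first unit.
        if prev_end is not None:
            cost = cost + join_weight * np.linalg.norm(unit_starts - prev_end, axis=1)
        best = int(np.argmin(cost))
        selected.append(best)
        prev_end = unit_ends[best]
    return selected
```

Because each unit is chosen as soon as its target frame arrives, the waveform can be generated incrementally; a dynamic-programming (Viterbi) search over the same costs would instead require the full target sequence before committing to any unit.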

DOI: 10.21437/Interspeech.2018-1857

Cite as: Watts, O., Valentini-Botinhao, C., Espic, F., King, S. (2018) Exemplar-based Speech Waveform Generation. Proc. Interspeech 2018, 2022-2026, DOI: 10.21437/Interspeech.2018-1857.

@inproceedings{watts18_interspeech,
  author={Oliver Watts and Cassia Valentini-Botinhao and Felipe Espic and Simon King},
  title={Exemplar-based Speech Waveform Generation},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2022--2026},
  doi={10.21437/Interspeech.2018-1857}
}