Effects of Training Data Variety in Generating Glottal Pulses from Acoustic Features with DNNs

Manu Airaksinen, Paavo Alku


The glottal volume velocity waveform, the acoustic excitation of voiced speech, cannot be acquired through direct measurement in normal production of continuous speech. Glottal inverse filtering (GIF), however, can be used to estimate the glottal flow from recorded speech signals. Unfortunately, the usefulness of GIF algorithms is limited because they are sensitive to noise and call for high-quality recordings. Recently, efforts have been made to expand the use of GIF by training deep neural networks (DNNs) to learn a statistical mapping between frame-level acoustic features and glottal pulses estimated by GIF. This framework has been successfully utilized in statistical speech synthesis in the form of the GlottDNN vocoder, which uses a DNN to generate glottal pulses that serve as the synthesizer's excitation waveform. In this study, we investigate how the DNN-based generation of glottal pulses is affected by training data variety. The evaluation is done using both objective measures and subjective listening tests of synthetic speech. The results suggest that the performance of glottal pulse generation with DNNs depends particularly on how well the training corpus suits GIF: processing low-pitched male speech and sustained phonations shows better performance than processing high-pitched female voices or continuous speech.
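The mapping at the core of the framework described above takes a per-frame acoustic feature vector in and produces a fixed-length glottal pulse waveform out. As an illustration only, it can be sketched as a small feed-forward regression network; the dimensions, layer sizes, and random weights below are hypothetical stand-ins (in the actual vocoder the parameters would be trained on GIF-estimated pulses), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: per-frame acoustic feature vector and a
# fixed-length glottal pulse target (e.g. two pitch periods).
N_FEATS, N_HIDDEN, N_PULSE = 48, 100, 400

# Randomly initialized weights stand in for parameters that would be
# learned from GIF-estimated glottal pulses during training.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_PULSE, N_HIDDEN))
b2 = np.zeros(N_PULSE)

def generate_pulse(features: np.ndarray) -> np.ndarray:
    """Map one frame's acoustic features to a glottal pulse waveform."""
    h = np.tanh(W1 @ features + b1)   # nonlinear hidden layer
    return W2 @ h + b2                # linear output: pulse samples

pulse = generate_pulse(rng.normal(size=N_FEATS))
print(pulse.shape)  # (400,)
```

At synthesis time, each generated pulse would be pitch-scaled, overlap-added, and filtered with the frame's vocal tract filter to produce speech; those stages are omitted here.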


DOI: 10.21437/Interspeech.2017-363

Cite as: Airaksinen, M., Alku, P. (2017) Effects of Training Data Variety in Generating Glottal Pulses from Acoustic Features with DNNs. Proc. Interspeech 2017, 3946-3950, DOI: 10.21437/Interspeech.2017-363.


@inproceedings{Airaksinen2017,
  author={Manu Airaksinen and Paavo Alku},
  title={Effects of Training Data Variety in Generating Glottal Pulses from Acoustic Features with DNNs},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={3946--3950},
  doi={10.21437/Interspeech.2017-363},
  url={http://dx.doi.org/10.21437/Interspeech.2017-363}
}