Improving TTS with Corpus-Specific Pronunciation Adaptation

Marie Tahon, Raheel Qader, Gwénolé Lecorvé, Damien Lolive

Text-to-speech (TTS) systems are built on speech corpora whose phoneme labels have been carefully checked and segmented. However, the phoneme sequences produced by automatic grapheme-to-phoneme converters at synthesis time are usually inconsistent with those of the corpus, leading to poor-quality synthetic speech signals. To solve this problem, the present work adapts automatically generated pronunciations to the corpus. The main idea is to train corpus-specific phoneme-to-phoneme conditional random fields (CRFs) with a large set of linguistic, phonological, articulatory and acoustic-prosodic features. Features are first selected under cross-validation, then combined to produce the final feature set. Pronunciation models are evaluated in terms of phoneme error rate and through perceptual tests. Experiments carried out on a French speech corpus show an improvement in the quality of synthetic speech when pronunciation models are included in the phonetization process. Beyond improving TTS quality, the presented pronunciation adaptation method also opens interesting perspectives for expressive speech synthesis.
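The abstract mentions evaluation in terms of phoneme error rate (PER): the Levenshtein edit distance between the predicted and reference phoneme sequences, normalized by the reference length. A minimal sketch of this metric (not the authors' code; the example word and phoneme symbols are illustrative only):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences."""
    n, m = len(ref), len(hyp)
    # prev[j] holds the distance between ref[:i-1] and hyp[:j]
    prev = list(range(m + 1))
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[m]

def phoneme_error_rate(ref, hyp):
    """PER = edit distance / reference sequence length."""
    return edit_distance(ref, hyp) / len(ref)

# Illustrative example: canonical /l e/ vs. a variant /l e z/
# (one insertion over a two-phoneme reference)
print(phoneme_error_rate(["l", "e"], ["l", "e", "z"]))  # → 0.5
```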

DOI: 10.21437/Interspeech.2016-864

Cite as:

Tahon, M., Qader, R., Lecorvé, G., Lolive, D. (2016) Improving TTS with Corpus-Specific Pronunciation Adaptation. Proc. Interspeech 2016, 2831-2835.

@inproceedings{tahon16_interspeech,
  author={Marie Tahon and Raheel Qader and Gwénolé Lecorvé and Damien Lolive},
  title={Improving TTS with Corpus-Specific Pronunciation Adaptation},
  booktitle={Interspeech 2016},
  year={2016},
  pages={2831--2835},
  doi={10.21437/Interspeech.2016-864}
}