This paper proposes a novel framework that enables us to manipulate and control formants in HMM-based speech synthesis. In this framework, the dependency between formants and spectral features is modelled by piecewise linear transforms, which map formant parameters to the means of Gaussian distributions over the spectral synthesis parameters. The spectral envelope features generated under the influence of formants in this way may then be passed to high-quality vocoders to generate the speech waveform. This provides two major advantages over conventional frameworks. First, we can achieve spectral modification by changing only those formants we wish to control, whereas conventional formant synthesisers (e.g. Klatt) require the user to specify all formants manually. Second, the framework produces high-quality speech. Our results show that the proposed method can control vowels in the synthesised speech by manipulating F1 and F2 without any degradation in synthesis quality.
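The core idea, a piecewise linear transform from formant parameters to the means of Gaussians over spectral features, can be illustrated with a minimal sketch. This is not the paper's implementation: the region boundaries, transform matrices `A`, offsets `b`, and the function `piecewise_linear_mean` are all hypothetical placeholders assumed for illustration.

```python
import numpy as np

def piecewise_linear_mean(formants, boundaries, transforms):
    """Map a formant vector to a spectral-feature Gaussian mean.

    formants   : (2,) array, e.g. [F1, F2] in Hz
    boundaries : list of F1 upper bounds defining the pieces
    transforms : list of (A, b) pairs, one linear transform per piece
    """
    # Select the piece whose F1 range contains this formant vector;
    # fall through to the last piece if none matches.
    idx = next((i for i, ub in enumerate(boundaries) if formants[0] <= ub),
               len(transforms) - 1)
    A, b = transforms[idx]
    return A @ formants + b  # mean of the Gaussian over spectral features

# Toy example: two pieces, 4-dimensional spectral features.
rng = np.random.default_rng(0)
transforms = [(rng.standard_normal((4, 2)), rng.standard_normal(4))
              for _ in range(2)]
mu = piecewise_linear_mean(np.array([500.0, 1500.0]), [700.0], transforms)
print(mu.shape)  # (4,)
```

Within each region the mapping is a plain affine transform, so changing F1 or F2 shifts the predicted spectral mean smoothly; the piecewise structure lets different formant regions use different linear relationships.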
Bibliographic reference. Lei, Ming / Yamagishi, Junichi / Richmond, Korin / Ling, Zhen-Hua / King, Simon / Dai, Li-Rong (2011): "Formant-controlled HMM-based speech synthesis", In INTERSPEECH-2011, 2777-2780.