Machine Speech Chain with One-shot Speaker Adaptation

Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

In previous work, we developed a closed-loop speech chain model based on deep learning, in which the architecture enabled the automatic speech recognition (ASR) and text-to-speech synthesis (TTS) components to mutually improve their performance. This was accomplished by having the two components teach each other using both labeled and unlabeled data. This approach significantly improved model performance on a single-speaker speech dataset, but yielded only a slight gain on multi-speaker tasks. Furthermore, the model remained unable to handle unseen speakers. In this paper, we present a new speech chain mechanism that integrates a speaker recognition model inside the loop. We also propose extending the capability of TTS to handle unseen speakers by implementing one-shot speaker adaptation. This enables TTS to mimic the voice characteristics of another speaker from only a single speaker sample, even when the text input carries no speaker information. In the speech chain loop mechanism, ASR also benefits from the ability to further learn an arbitrary speaker's characteristics from the generated speech waveform, resulting in a significant improvement in the recognition rate.
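The closed-loop mechanism described above can be sketched schematically: unpaired speech is transcribed by ASR and reconstructed by TTS conditioned on a one-shot speaker embedding, while unpaired text is synthesized by TTS to create new training pairs for ASR. The sketch below is illustrative only; all functions are hypothetical stand-ins, not the authors' actual architectures.

```python
# Illustrative sketch of a speech chain loop with a speaker encoder.
# All models below are toy stubs (assumptions), standing in for the
# neural ASR, TTS, and speaker-recognition components in the paper.

def asr(speech):
    """Stub ASR: maps a speech 'signal' to a text hypothesis."""
    return "text hypothesis"

def speaker_encoder(speech):
    """Stub one-shot speaker encoder: a fixed-size speaker embedding
    extracted from a single utterance."""
    return [sum(speech) / len(speech)]

def tts(text, speaker_embedding):
    """Stub TTS conditioned on a speaker embedding, so it can mimic
    an arbitrary speaker's voice characteristics."""
    return [speaker_embedding[0]] * 4

def l2(a, b):
    """Squared-error reconstruction loss between two signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def speech_chain_step(unpaired_speech, unpaired_text):
    # Speech-only branch: ASR transcribes the utterance, TTS tries to
    # reconstruct it from the hypothesis plus a one-shot speaker
    # embedding; the reconstruction loss trains TTS.
    hyp = asr(unpaired_speech)
    spk = speaker_encoder(unpaired_speech)
    reconstructed = tts(hyp, spk)
    tts_loss = l2(reconstructed, unpaired_speech[: len(reconstructed)])

    # Text-only branch: TTS synthesizes speech for the text (here with
    # the same embedding for brevity); the (synthetic speech, text)
    # pair gives ASR extra training data covering that speaker.
    synthetic_speech = tts(unpaired_text, spk)
    asr_training_pair = (synthetic_speech, unpaired_text)
    return tts_loss, asr_training_pair
```

In an actual implementation, both branches would backpropagate through the respective networks; the stub merely shows how the two components exchange supervision signals within one loop iteration.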

DOI: 10.21437/Interspeech.2018-1558

Cite as: Tjandra, A., Sakti, S., Nakamura, S. (2018) Machine Speech Chain with One-shot Speaker Adaptation. Proc. Interspeech 2018, 887-891, DOI: 10.21437/Interspeech.2018-1558.

@inproceedings{tjandra2018speechchain,
  author={Andros Tjandra and Sakriani Sakti and Satoshi Nakamura},
  title={Machine Speech Chain with One-shot Speaker Adaptation},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={887--891},
  doi={10.21437/Interspeech.2018-1558}
}