Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge

Andros Tjandra, Sakriani Sakti, Satoshi Nakamura


In this paper, we report our submitted system for the ZeroSpeech 2020 challenge on Track 2019. The main theme of this challenge is to build a speech synthesizer without any textual information or phonetic labels. To tackle this challenge, we build a system that must address two major components: 1) given speech audio, extract subword units in an unsupervised way, and 2) re-synthesize the audio in a novel speaker's voice. The system also needs to balance codebook performance between the ABX error rate and the bitrate compression rate. Our main contributions are a Transformer-based VQ-VAE for unsupervised unit discovery and a Transformer-based inverter for speech synthesis given the extracted codebook. Additionally, we also explored several regularization methods to improve performance even further.
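The discrete units at the heart of a VQ-VAE come from mapping each continuous encoder output to its nearest vector in a learned codebook; the resulting index sequence is what drives both the ABX evaluation and the bitrate measurement. As a minimal illustrative sketch (not the authors' implementation; the array shapes and the toy codebook below are assumptions), the quantization step can be written as:

```python
import numpy as np

def quantize(z_e, codebook):
    """Assign each encoder frame to its nearest codebook vector.

    z_e:      (T, D) continuous encoder outputs, one D-dim vector per frame
    codebook: (K, D) learned discrete embeddings
    returns:  (T,) code indices and the (T, D) quantized vectors
    """
    # Squared Euclidean distance between every frame and every code
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # the discrete "unit" per frame
    z_q = codebook[indices]          # quantized representation fed onward
    return indices, z_q

# Toy example: 3 frames, a codebook of 4 two-dimensional codes
codebook = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
z_e = np.array([[0.1, 0.1], [0.9, 0.05], [0.8, 0.9]])
idx, z_q = quantize(z_e, codebook)   # → idx = [0, 1, 3]

# An upper bound on the bitrate follows from the codebook size:
# at most log2(K) bits per emitted frame.
bits_per_frame = np.log2(len(codebook))  # 2.0 for K = 4
```

Shrinking the codebook (smaller K) lowers the bitrate but gives the model fewer units to discriminate phonetic contrasts with, which is the trade-off the challenge scores.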


DOI: 10.21437/Interspeech.2020-3033

Cite as: Tjandra, A., Sakti, S., Nakamura, S. (2020) Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge. Proc. Interspeech 2020, 4851-4855, DOI: 10.21437/Interspeech.2020-3033.


@inproceedings{Tjandra2020,
  author={Andros Tjandra and Sakriani Sakti and Satoshi Nakamura},
  title={{Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4851--4855},
  doi={10.21437/Interspeech.2020-3033},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3033}
}