Generative Adversarial Training Data Adaptation for Very Low-Resource Automatic Speech Recognition

Kohei Matsuura, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara


Transcribing and archiving speech data of endangered languages is important for preserving the heritage of verbal culture, and automatic speech recognition (ASR) is a powerful tool to facilitate this process. However, since endangered languages generally lack large corpora with many speakers, ASR models trained on them perform considerably poorly. Nevertheless, we are often left with many recordings of spontaneous speech that have to be transcribed. In this work, to mitigate this speaker sparsity problem, we propose converting the whole training speech data so that it sounds like the test speaker, in order to develop a highly accurate ASR system for that speaker. For this purpose, we utilize CycleGAN-based non-parallel voice conversion to generate labeled training data that is close to the test speaker's speech. We evaluated this speaker adaptation approach on two low-resource corpora, namely Ainu and Mboshi. We obtained 35–60% relative improvement in phone error rate on the Ainu corpus, and 40% relative improvement was attained on the Mboshi corpus. This approach outperformed two conventional methods, unsupervised adaptation and multilingual training, on both corpora.
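The key property that makes CycleGAN-based voice conversion usable here is that it needs no parallel utterances: the cycle-consistency loss compares a feature only to its own round trip through the two generators. The toy sketch below (illustrative only; the generators, shapes, and feature dimensions are assumptions, not the paper's architecture) shows that objective with linear maps standing in for the learned source-to-target generator G and its reverse F.

```python
import numpy as np

# Toy sketch of the CycleGAN cycle-consistency objective used in
# non-parallel voice conversion. G maps source-speaker acoustic
# features toward the test speaker; F maps back. All names and the
# 4-dim feature space are illustrative assumptions.

rng = np.random.default_rng(0)
W_g = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # toy generator G
W_f = np.linalg.inv(W_g)                             # toy reverse generator F

def G(x):
    """Map source-speaker features to the target speaker's space."""
    return x @ W_g

def F(y):
    """Map target-speaker features back to the source space."""
    return y @ W_f

def cycle_consistency_loss(x_batch):
    """Mean L1 distance between x and its round trip F(G(x)).

    No parallel target utterance appears here: the loss only needs
    the source batch itself, which is why training data from any
    speaker can be adapted toward a single test speaker.
    """
    return float(np.mean(np.abs(F(G(x_batch)) - x_batch)))

x = rng.standard_normal((8, 4))      # a batch of toy acoustic features
loss = cycle_consistency_loss(x)
print(f"cycle loss: {loss:.2e}")     # near zero, since F inverts G exactly
```

In a real system G and F are neural networks trained jointly with adversarial discriminators, so F only approximately inverts G and the cycle loss is minimized rather than exactly zero; the labeled adapted corpus is then {G(x), original transcript} for every training utterance x.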


DOI: 10.21437/Interspeech.2020-1195

Cite as: Matsuura, K., Mimura, M., Sakai, S., Kawahara, T. (2020) Generative Adversarial Training Data Adaptation for Very Low-Resource Automatic Speech Recognition. Proc. Interspeech 2020, 2737-2741, DOI: 10.21437/Interspeech.2020-1195.


@inproceedings{Matsuura2020,
  author={Kohei Matsuura and Masato Mimura and Shinsuke Sakai and Tatsuya Kawahara},
  title={{Generative Adversarial Training Data Adaptation for Very Low-Resource Automatic Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2737--2741},
  doi={10.21437/Interspeech.2020-1195},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1195}
}