Multilingual Speech Recognition with Corpus Relatedness Sampling

Xinjian Li, Siddharth Dalmia, Alan W. Black, Florian Metze

Multilingual acoustic models have been successfully applied to low-resource speech recognition. Most existing works combine many small corpora and pretrain a multilingual model by sampling from each corpus uniformly; the model is then fine-tuned on each target corpus. This approach, however, fails to exploit the relatedness and similarity among corpora in the training set: the target corpus might benefit more from a corpus in the same domain or from a closely related language. In this work, we propose a simple but effective sampling strategy that takes advantage of this relatedness. We first compute corpus-level embeddings and estimate the similarity between each pair of corpora. We then begin training the multilingual model by sampling uniformly from each corpus, and gradually increase the probability of sampling from corpora related to the target, in proportion to their similarity with it. In the end, the model is automatically fine-tuned on the target corpus. Our sampling strategy outperforms the baseline multilingual model on 16 low-resource tasks. Additionally, we demonstrate that our corpus embeddings capture the language and domain information of each corpus.
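The sampling schedule described above — starting from uniform sampling and gradually concentrating on corpora related to the target — can be sketched as a temperature-annealed softmax over corpus similarities. This is an illustrative sketch, not the paper's exact formulation: the function name, the softmax form, and the similarity values are assumptions.

```python
import numpy as np

def sampling_probs(similarity, temperature):
    """Softmax over corpus similarities at a given temperature.

    `similarity` is a vector of similarities between each training
    corpus and the target corpus (the target's self-similarity is
    the largest entry). A high temperature yields near-uniform
    sampling; annealing the temperature toward zero shifts the
    probability mass onto corpora most similar to the target, until
    almost all samples come from the target corpus itself --
    approximating an automatic fine-tuning stage.
    (Illustrative sketch; not the paper's exact schedule.)
    """
    logits = np.asarray(similarity, dtype=float) / temperature
    logits -= logits.max()            # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()    # normalize to a distribution
```

Decreasing the temperature over training steps then interpolates smoothly between the multilingual pretraining regime and target-corpus fine-tuning.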

DOI: 10.21437/Interspeech.2019-3052

Cite as: Li, X., Dalmia, S., Black, A.W., Metze, F. (2019) Multilingual Speech Recognition with Corpus Relatedness Sampling. Proc. Interspeech 2019, 2120-2124, DOI: 10.21437/Interspeech.2019-3052.

@inproceedings{li2019multilingual,
  author={Xinjian Li and Siddharth Dalmia and Alan W. Black and Florian Metze},
  title={{Multilingual Speech Recognition with Corpus Relatedness Sampling}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={2120--2124},
  doi={10.21437/Interspeech.2019-3052}
}