Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters

Vineel Pratap, Anuroop Sriram, Paden Tomasello, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, Ronan Collobert


We study training a single acoustic model for multiple languages with the aim of improving automatic speech recognition (ASR) performance on low-resource languages, and overall simplifying deployment of ASR systems that support diverse languages. We perform an extensive benchmark on 51 languages, with varying amounts of training data per language (from 100 hours to 1100 hours). We compare three variants of multilingual training: a single joint model trained without knowledge of the input language, a joint model that uses this information, and a model with multiple heads (one per language “cluster”). We show that multilingual training of ASR models on several languages can improve recognition performance, in particular on low-resource languages. We see average relative WER reductions of 20.9%, 23%, and 28.8% over monolingual baselines for the joint model, the joint model with language input, and the multi-head model, respectively. To our knowledge, this is the first work studying multilingual ASR at massive scale, with more than 50 languages and more than 16,000 hours of audio across them.
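The multi-head variant described above can be sketched as a shared encoder whose output is routed to one output head per language cluster, selected by the input language ID. The sketch below is illustrative only, assuming a hypothetical language-to-cluster mapping and toy stand-ins for the encoder and heads; it is not the paper's implementation.

```python
# Hypothetical mapping from language ID to language "cluster"
# (clusters group related languages that share an output head).
LANG_TO_CLUSTER = {"en": "germanic", "de": "germanic",
                   "fr": "romance", "es": "romance"}

def shared_encoder(features):
    # Stand-in for the shared acoustic encoder used by all languages:
    # here just a toy element-wise transform.
    return [2.0 * x for x in features]

# One toy head per cluster, standing in for a cluster-specific
# output layer over that cluster's token set.
CLUSTER_HEADS = {
    "germanic": lambda h: [x + 0.1 for x in h],
    "romance":  lambda h: [x - 0.1 for x in h],
}

def forward(features, lang):
    h = shared_encoder(features)                  # shared across languages
    head = CLUSTER_HEADS[LANG_TO_CLUSTER[lang]]   # route by language cluster
    return head(h)

print(forward([1.0, 2.0], "en"))  # routed through the "germanic" head
```

The joint-model variants correspond to dropping the routing step (no language information) or feeding the language ID into the shared encoder itself (e.g., as an embedding concatenated with the input features).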


DOI: 10.21437/Interspeech.2020-2831

Cite as: Pratap, V., Sriram, A., Tomasello, P., Hannun, A., Liptchinsky, V., Synnaeve, G., Collobert, R. (2020) Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters. Proc. Interspeech 2020, 4751-4755, DOI: 10.21437/Interspeech.2020-2831.


@inproceedings{Pratap2020,
  author={Vineel Pratap and Anuroop Sriram and Paden Tomasello and Awni Hannun and Vitaliy Liptchinsky and Gabriel Synnaeve and Ronan Collobert},
  title={{Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4751--4755},
  doi={10.21437/Interspeech.2020-2831},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2831}
}