PyChain: A Fully Parallelized PyTorch Implementation of LF-MMI for End-to-End ASR

Yiwen Shao, Yiming Wang, Daniel Povey, Sanjeev Khudanpur


We present PyChain, a fully parallelized PyTorch implementation of end-to-end lattice-free maximum mutual information (LF-MMI) training for the so-called chain models in the Kaldi automatic speech recognition (ASR) toolkit. Unlike other PyTorch- and Kaldi-based ASR toolkits, PyChain is designed to be as flexible and light-weight as possible so that it can be easily plugged into new ASR projects, or other existing PyTorch-based ASR tools, as exemplified respectively by a new project, PyChain-example, and Espresso, an existing end-to-end ASR toolkit. PyChain's efficiency and flexibility are demonstrated through such novel features as full GPU training on numerator/denominator graphs, and support for unequal-length sequences. Experiments on the WSJ dataset show that with simple neural networks and commonly used machine learning techniques, PyChain can achieve competitive results that are comparable to Kaldi and better than other end-to-end ASR systems.
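For context, the LF-MMI objective referenced in the abstract is, in its standard formulation (notation here is a generic sketch following the LF-MMI literature, not taken from this paper), the sum over utterances of the log-ratio of numerator and denominator graph likelihoods:

```latex
% Standard LF-MMI objective (sketch; symbols are illustrative):
%   x_u      : acoustic feature sequence of utterance u
%   G_num(u) : numerator graph encoding the supervision for utterance u
%   G_den    : shared denominator graph (phone-level LM composed with the HMM topology)
\mathcal{F}_{\mathrm{LF\text{-}MMI}}
  = \sum_{u} \log \frac{p\left(x_u \mid G_{\mathrm{num}}^{(u)}\right)}
                       {p\left(x_u \mid G_{\mathrm{den}}\right)}
```

Both terms are computed with the forward algorithm over the respective graphs; performing both the numerator and denominator computations on the GPU is one of the features the abstract highlights.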


DOI: 10.21437/Interspeech.2020-3053

Cite as: Shao, Y., Wang, Y., Povey, D., Khudanpur, S. (2020) PyChain: A Fully Parallelized PyTorch Implementation of LF-MMI for End-to-End ASR. Proc. Interspeech 2020, 561-565, DOI: 10.21437/Interspeech.2020-3053.


@inproceedings{Shao2020,
  author={Yiwen Shao and Yiming Wang and Daniel Povey and Sanjeev Khudanpur},
  title={{PyChain: A Fully Parallelized PyTorch Implementation of LF-MMI for End-to-End ASR}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={561--565},
  doi={10.21437/Interspeech.2020-3053},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3053}
}