Neural Zero-Inflated Quality Estimation Model for Automatic Speech Recognition System

Kai Fan, Bo Li, Jiayi Wang, Shiliang Zhang, Boxing Chen, Niyu Ge, Zhijie Yan


The performance of automatic speech recognition (ASR) systems is usually evaluated with the word error rate (WER) metric, which requires manually transcribed data that are expensive to obtain in real-world scenarios. In addition, the empirical distribution of WER for most ASR systems tends to place significant mass near zero, making it difficult to model with a single continuous distribution. To address these two issues in ASR quality estimation (QE), we propose a novel neural zero-inflated model that predicts the WER of an ASR result without transcripts. We design a neural zero-inflated beta regression on top of a bidirectional transformer language model conditioned on speech features (speech-BERT). We also adopt a token-level masked language modeling pre-training strategy for speech-BERT, and further fine-tune it with our zero-inflated layer for the mixture of discrete and continuous outputs. The experimental results show that our approach achieves better performance on WER prediction compared with strong baselines.
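The abstract describes a zero-inflated beta regression: a point mass at WER = 0 mixed with a Beta density for strictly positive WER values. The paper's exact parameterization is not given here, so the following is only a minimal sketch of the standard zero-inflated Beta negative log-likelihood that such an output layer would minimize; the function names and the scalar (non-batched) interface are illustrative assumptions, not the authors' code.

```python
import math

def beta_log_pdf(y, alpha, beta):
    """Log-density of Beta(alpha, beta) evaluated at y in (0, 1)."""
    log_norm = (math.lgamma(alpha + beta)
                - math.lgamma(alpha) - math.lgamma(beta))
    return log_norm + (alpha - 1.0) * math.log(y) + (beta - 1.0) * math.log(1.0 - y)

def zero_inflated_beta_nll(y, pi, alpha, beta):
    """Negative log-likelihood of a zero-inflated Beta model.

    pi is the probability mass placed exactly at y == 0 (the "zero
    inflation"); with probability (1 - pi) the target is drawn from
    Beta(alpha, beta) on (0, 1). In a neural QE model, pi, alpha, and
    beta would be predicted per utterance by the network head.
    """
    if y == 0.0:
        # Discrete component: the utterance was recognized perfectly.
        return -math.log(pi)
    # Continuous component: positive WER modeled by the Beta density.
    return -(math.log(1.0 - pi) + beta_log_pdf(y, alpha, beta))
```

In training, this loss would be summed over utterances, with the discrete branch handling the spike of perfect recognitions and the Beta branch fitting the remaining continuous WER values.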


DOI: 10.21437/Interspeech.2020-1881

Cite as: Fan, K., Li, B., Wang, J., Zhang, S., Chen, B., Ge, N., Yan, Z. (2020) Neural Zero-Inflated Quality Estimation Model for Automatic Speech Recognition System. Proc. Interspeech 2020, 606-610, DOI: 10.21437/Interspeech.2020-1881.


@inproceedings{Fan2020,
  author={Kai Fan and Bo Li and Jiayi Wang and Shiliang Zhang and Boxing Chen and Niyu Ge and Zhijie Yan},
  title={{Neural Zero-Inflated Quality Estimation Model for Automatic Speech Recognition System}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={606--610},
  doi={10.21437/Interspeech.2020-1881},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1881}
}