Statistical Testing on ASR Performance via Blockwise Bootstrap

Zhe Liu, Fuchun Peng

A common question raised in automatic speech recognition (ASR) evaluations is how reliable an observed word error rate (WER) improvement is when comparing two ASR systems; statistical hypothesis testing and confidence intervals (CI) can be utilized to tell whether such an improvement is real or due only to random chance. The bootstrap resampling method has been popular for such significance analysis, as it is intuitive and easy to use. However, this method fails when dealing with dependent data, which is prevalent in the speech domain: for example, ASR performance on utterances from the same speaker could be correlated. In this paper we present a blockwise bootstrap approach: by dividing evaluation utterances into non-overlapping blocks, this method resamples these blocks instead of the original data. We show that the resulting variance estimator of the absolute WER difference between two ASR systems is consistent under mild conditions. We also demonstrate the validity of the blockwise bootstrap method on both synthetic and real-world speech data.
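The resampling scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' code; it assumes a simple data layout in which each block (e.g. one per speaker) is a list of per-utterance tuples `(n_words, errors_system_a, errors_system_b)`, and it returns a percentile confidence interval for the absolute WER difference by resampling whole blocks with replacement:

```python
import random

def blockwise_bootstrap_wer_diff(blocks, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the absolute WER difference (A - B).

    blocks: list of non-overlapping blocks; each block is a list of
    (n_words, errors_sys_a, errors_sys_b) tuples for its utterances.
    Resampling whole blocks preserves within-block dependence.
    """
    rng = random.Random(seed)

    def wer_diff(sample):
        words = sum(n for blk in sample for n, _, _ in blk)
        err_a = sum(ea for blk in sample for _, ea, _ in blk)
        err_b = sum(eb for blk in sample for _, _, eb in blk)
        return (err_a - err_b) / words  # WER(A) - WER(B)

    diffs = []
    for _ in range(n_boot):
        # draw as many blocks as the original data has, with replacement
        resampled = [rng.choice(blocks) for _ in blocks]
        diffs.append(wer_diff(resampled))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return wer_diff(blocks), (lo, hi)
```

If the interval excludes zero, the observed WER difference is unlikely to be due to chance alone at the chosen level; resampling blocks rather than individual utterances is what accounts for correlation within a speaker's utterances.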

DOI: 10.21437/Interspeech.2020-1338

Cite as: Liu, Z., Peng, F. (2020) Statistical Testing on ASR Performance via Blockwise Bootstrap. Proc. Interspeech 2020, 596-600, DOI: 10.21437/Interspeech.2020-1338.

@inproceedings{liu2020statistical,
  author={Zhe Liu and Fuchun Peng},
  title={{Statistical Testing on ASR Performance via Blockwise Bootstrap}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={596--600},
  doi={10.21437/Interspeech.2020-1338}
}