A Scalable Noisy Speech Dataset and Online Subjective Test Framework

Chandan K.A. Reddy, Ebrahim Beyrami, Jamie Pool, Ross Cutler, Sriram Srinivasan, Johannes Gehrke

Background noise is a major source of quality impairments in Voice over Internet Protocol (VoIP) and Public Switched Telephone Network (PSTN) calls. Recent work shows the efficacy of deep learning for noise suppression, but the datasets have been relatively small compared to those used in other domains (e.g., ImageNet) and the associated evaluations have been more focused. To better facilitate deep learning research in Speech Enhancement, we present a noisy speech dataset (MS-SNSD) that can scale to arbitrary sizes depending on the number of speakers, noise types, and Signal to Noise Ratio (SNR) levels desired. We show that increasing the dataset size improves noise suppression performance, as expected. In addition, we provide an open-source evaluation methodology for evaluating results subjectively at scale using crowdsourcing, with a reference algorithm to normalize the results. To demonstrate the dataset and evaluation framework, we apply them to several noise suppressors, compare the subjective Mean Opinion Score (MOS) with objective quality measures such as SNR, PESQ, POLQA, and ViSQOL, and show why MOS is still required. Our subjective MOS evaluation is the first large-scale evaluation of Speech Enhancement algorithms that we are aware of.
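The core operation behind scaling such a dataset is mixing a clean speech clip with a noise clip at a chosen SNR level. The sketch below is a minimal, hypothetical illustration of that idea (the function name `mix_at_snr` and the synthetic signals are ours, not the MS-SNSD implementation):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that adding it to `clean` yields the target SNR in dB.

    Hypothetical helper for illustration only; the actual MS-SNSD tooling
    may differ in details such as level normalization and clipping handling.
    """
    # Tile or truncate the noise so it matches the speech length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]

    # RMS levels of the speech and noise signals.
    rms_clean = np.sqrt(np.mean(clean ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))

    # Gain g chosen so that 20*log10(rms_clean / (g * rms_noise)) == snr_db.
    gain = rms_clean / (rms_noise * 10.0 ** (snr_db / 20.0))
    return clean + gain * noise

# Synthetic example: a 440 Hz tone as "speech", white noise as "noise".
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
noise = rng.standard_normal(8000)
noisy = mix_at_snr(clean, noise, snr_db=5.0)
```

Sweeping `snr_db` over a grid (e.g., 0 to 40 dB) and iterating over speakers and noise types is what lets the dataset grow to arbitrary size.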

DOI: 10.21437/Interspeech.2019-3087

Cite as: Reddy, C.K., Beyrami, E., Pool, J., Cutler, R., Srinivasan, S., Gehrke, J. (2019) A Scalable Noisy Speech Dataset and Online Subjective Test Framework. Proc. Interspeech 2019, 1816-1820, DOI: 10.21437/Interspeech.2019-3087.

@inproceedings{reddy19_interspeech,
  author={Chandan K.A. Reddy and Ebrahim Beyrami and Jamie Pool and Ross Cutler and Sriram Srinivasan and Johannes Gehrke},
  title={{A Scalable Noisy Speech Dataset and Online Subjective Test Framework}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={1816--1820},
  doi={10.21437/Interspeech.2019-3087}
}