Surgical Mask Detection with Deep Recurrent Phonetic Models

Philipp Klumpp, Tomás Arias-Vergara, Juan Camilo Vásquez-Correa, Paula Andrea Pérez-Toro, Florian Hönig, Elmar Nöth, Juan Rafael Orozco-Arroyave

To solve the task of surgical mask detection from audio recordings within the scope of the Interspeech ComParE challenge, we introduce a phonetic recognizer that is able to differentiate between clear and mask speech samples.

A deep recurrent phoneme recognition model is first trained on spectrograms from a German corpus to learn the spectral properties of different speech sounds. Under the assumption that each phoneme sounds different in clear and mask speech, the model is then used to compute frame-wise phonetic labels for the challenge data, including information about the presence of a surgical mask. These labels serve to train a second phoneme recognition model which is finally able to differentiate between mask and clear phoneme productions. For a single utterance, we compute a functional representation and train a random forest classifier to detect whether a speech sample was produced with or without a mask.
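The final classification step described above can be sketched as follows. This is not the authors' code: the functional representation here (per-dimension mean and standard deviation of frame-wise phonetic posteriors) and the stand-in data are assumptions for illustration only.

```python
# Minimal sketch: collapse frame-wise phonetic posteriors into a
# fixed-length functional vector per utterance, then fit a random
# forest to separate "clear" from "mask" recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def functionals(posteriors: np.ndarray) -> np.ndarray:
    """Map a (frames x phonemes) posterior matrix to a fixed-length
    utterance vector; mean and std per dimension is one common choice."""
    return np.concatenate([posteriors.mean(axis=0), posteriors.std(axis=0)])

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 40 utterances with variable frame counts
# and 10 phoneme classes (the real models produce these posteriors).
utterances = [rng.random((int(rng.integers(50, 200)), 10)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)  # 0 = clear, 1 = mask

X = np.stack([functionals(u) for u in utterances])  # (40, 20) feature matrix
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
pred = clf.predict(X)  # one 0/1 mask decision per utterance
```

Functionals make the representation length-independent, so utterances of any duration map to the same feature space before classification.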

Our method performed better than the baseline methods on both the validation and the test set. Furthermore, we showed how wearing a mask influences the speech signal: certain phoneme groups were clearly affected by the obstruction in front of the vocal tract, while others remained almost unaffected.

DOI: 10.21437/Interspeech.2020-1723

Cite as: Klumpp, P., Arias-Vergara, T., Vásquez-Correa, J.C., Pérez-Toro, P.A., Hönig, F., Nöth, E., Orozco-Arroyave, J.R. (2020) Surgical Mask Detection with Deep Recurrent Phonetic Models. Proc. Interspeech 2020, 2057-2061, DOI: 10.21437/Interspeech.2020-1723.

@inproceedings{klumpp20_interspeech,
  author={Philipp Klumpp and Tomás Arias-Vergara and Juan Camilo Vásquez-Correa and Paula Andrea Pérez-Toro and Florian Hönig and Elmar Nöth and Juan Rafael Orozco-Arroyave},
  title={{Surgical Mask Detection with Deep Recurrent Phonetic Models}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2057--2061},
  doi={10.21437/Interspeech.2020-1723}
}