Learnable Spectro-Temporal Receptive Fields for Robust Voice Type Discrimination

Tyler Vuong, Yangyang Xia, Richard M. Stern


Voice Type Discrimination (VTD) refers to the task of discriminating regions of a recording in which speech was produced by speakers physically near the recording device (“Live Speech”) from regions containing played-back speech and other audio, such as traffic noise and television broadcasts (“Distractor Audio”). In this work, we propose a deep-learning-based VTD system that features an initial layer of learnable spectro-temporal receptive fields (STRFs). Our approach also provides very strong performance on a similar spoofing detection task in the ASVspoof 2019 challenge. We evaluate our approach on a new standardized VTD database that was collected to support research in this area. In particular, we study the effect of using learnable STRFs compared to static STRFs or unconstrained kernels. We also show that our system consistently improves on a competitive baseline system across a wide range of signal-to-noise ratios for spoofing detection in the presence of VTD distractor noise.
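The abstract does not spell out the exact STRF parameterization, but STRF front ends are commonly built as 2D Gabor-like filters over a time-frequency representation, with the modulation parameters made trainable. The sketch below, a hypothetical illustration using only NumPy, builds one such kernel from a temporal modulation rate (Hz) and a spectral modulation scale (cycles per octave), then applies it to a toy spectrogram with a valid 2D correlation; the function names, window sizes, and sampling constants are assumptions, not the authors' implementation.

```python
import numpy as np

def strf_kernel(rate_hz, scale_cpo, t_len=32, f_len=16,
                frame_rate=100.0, bins_per_oct=24.0):
    """Build a 2D Gabor-like spectro-temporal kernel.

    rate_hz: temporal modulation rate; scale_cpo: spectral modulation
    scale in cycles per octave. In a learnable-STRF layer these two
    scalars would be trainable parameters; here they are plain floats
    (hypothetical parameterization, not the paper's exact one).
    """
    t = (np.arange(t_len) - t_len // 2) / frame_rate        # seconds
    f = (np.arange(f_len) - f_len // 2) / bins_per_oct      # octaves
    T, F = np.meshgrid(t, f, indexing="ij")                 # (t_len, f_len)
    envelope = np.exp(-0.5 * ((T / t.std()) ** 2 + (F / f.std()) ** 2))
    carrier = np.cos(2 * np.pi * (rate_hz * T + scale_cpo * F))
    k = envelope * carrier
    return k - k.mean()  # zero mean, as is typical for modulation filters

def apply_strf(spec, kernel):
    """Valid-mode 2D correlation of a (time, freq) spectrogram with one kernel."""
    kt, kf = kernel.shape
    st, sf = spec.shape
    out = np.empty((st - kt + 1, sf - kf + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(spec[i:i + kt, j:j + kf] * kernel)
    return out

# Usage: filter a random 1-second, 40-channel log-spectrogram.
kernel = strf_kernel(rate_hz=4.0, scale_cpo=0.5)
spec = np.random.default_rng(0).standard_normal((100, 40))
features = apply_strf(spec, kernel)   # shape (69, 25)
```

In a trained system, a bank of such kernels would form the first convolutional layer, with gradients flowing back into the rate and scale parameters rather than into unconstrained kernel weights.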


DOI: 10.21437/Interspeech.2020-1878

Cite as: Vuong, T., Xia, Y., Stern, R.M. (2020) Learnable Spectro-Temporal Receptive Fields for Robust Voice Type Discrimination. Proc. Interspeech 2020, 1957-1961, DOI: 10.21437/Interspeech.2020-1878.


@inproceedings{Vuong2020,
  author={Tyler Vuong and Yangyang Xia and Richard M. Stern},
  title={{Learnable Spectro-Temporal Receptive Fields for Robust Voice Type Discrimination}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1957--1961},
  doi={10.21437/Interspeech.2020-1878},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1878}
}