Small-Footprint Keyword Spotting with Multi-Scale Temporal Convolution

Ximin Li, Xiaodong Wei, Xiaowei Qin


Keyword Spotting (KWS) plays a vital role in human-computer interaction for smart on-device terminals and service robots. It remains challenging to achieve a good trade-off between small footprint and high accuracy for the KWS task. In this paper, we explore the application of multi-scale temporal modeling to the small-footprint keyword spotting task. We propose a multi-branch temporal convolution module (MTConv), a CNN block consisting of multiple temporal convolution filters with different kernel sizes, which enriches the temporal feature space. In addition, taking advantage of temporal and depthwise convolution, a temporal efficient neural network (TENet) is designed for the KWS system. Based on the proposed model, we replace standard temporal convolution layers with MTConvs, which can be trained for better performance. At the inference stage, the MTConv can be equivalently converted to the base convolution architecture, so no extra parameters or computational costs are added compared to the base model. Results on the Google Speech Commands Dataset show that one of our models trained with MTConv achieves 96.8% accuracy with only 100K parameters.
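The equivalence between the trained multi-branch module and a single inference-time convolution can be sketched in a few lines: zero-padding each smaller kernel to the largest kernel size and summing the weights yields one filter whose output matches the sum of the branches. The sketch below is a minimal single-channel numpy illustration of that re-parameterization idea, not the paper's implementation; it omits multiple channels, depthwise structure, and batch normalization.

```python
import numpy as np

def conv1d_same(x, w):
    """1D cross-correlation with 'same' zero padding (single channel)."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.standard_normal(50)                         # toy temporal signal
w3, w5, w9 = (rng.standard_normal(k) for k in (3, 5, 9))  # branch kernels

# Training time: parallel temporal convolutions with different kernel
# sizes, outputs summed (the multi-branch behavior).
multi = conv1d_same(x, w3) + conv1d_same(x, w5) + conv1d_same(x, w9)

# Inference time: fuse branches by zero-padding the smaller kernels to
# the largest size and adding the weights, giving one equivalent filter.
fused = w9 + np.pad(w5, 2) + np.pad(w3, 3)
single = conv1d_same(x, fused)

print(np.allclose(multi, single))  # prints True
```

Because the extra taps of a zero-padded kernel contribute nothing, the fused convolution is exactly equivalent, which is why the conversion adds no parameters or computation at inference.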


 DOI: 10.21437/Interspeech.2020-3177

Cite as: Li, X., Wei, X., Qin, X. (2020) Small-Footprint Keyword Spotting with Multi-Scale Temporal Convolution. Proc. Interspeech 2020, 1987-1991, DOI: 10.21437/Interspeech.2020-3177.


@inproceedings{Li2020,
  author={Ximin Li and Xiaodong Wei and Xiaowei Qin},
  title={{Small-Footprint Keyword Spotting with Multi-Scale Temporal Convolution}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1987--1991},
  doi={10.21437/Interspeech.2020-3177},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3177}
}