Quantization Aware Training with Absolute-Cosine Regularization for Automatic Speech Recognition

Hieu Duy Nguyen, Anastasios Alexandridis, Athanasios Mouchtaris


Compression and quantization are important for neural networks in general and Automatic Speech Recognition (ASR) systems in particular, especially when they run in real time on resource-constrained devices. Using fewer bits for the model weights shrinks the model size considerably and speeds up inference, at the cost of degraded performance. Such degradation can potentially be addressed by quantization-aware training (QAT). Existing QAT schemes mostly account for quantization in forward propagation, while ignoring the quantization loss in gradient calculation during back-propagation. In this work, we introduce a novel QAT scheme based on absolute-cosine regularization (ACosR), which enforces a prior, quantization-friendly distribution on the model weights. We apply this novel approach to the ASR task, assuming a recurrent neural network transducer (RNN-T) architecture. The results show little to no degradation between floating-point, 8-bit, and 6-bit ACosR models. Weight distributions further confirm that in-training weights lie very close to the quantization levels when ACosR is applied.
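To illustrate the idea of a quantization-friendly regularizer, here is a minimal NumPy sketch. It assumes uniform quantization with step size delta and uses a periodic absolute-cosine penalty that vanishes exactly on the quantization grid; the precise functional form, scaling, and hyperparameters in the paper may differ, and the names `acos_regularizer` and `lam` are illustrative, not from the paper.

```python
import numpy as np

def acos_regularizer(weights, delta, lam=1e-4):
    """Hypothetical absolute-cosine regularization term (illustrative only).

    The penalty is zero whenever a weight sits exactly on a uniform
    quantization level k * delta, and grows as weights drift between
    levels, nudging training toward quantization-friendly values.
    """
    # |cos(pi * w / delta + pi/2)| equals |sin(pi * w / delta)|:
    # zero at w = k * delta, maximal halfway between two levels.
    return lam * np.sum(np.abs(np.cos(np.pi * weights / delta + np.pi / 2)))

# Example: 8-bit symmetric quantization of weights in [-1, 1] -> step 2/255.
delta = 2.0 / 255.0
on_grid = np.array([0.0, delta, -3.0 * delta])   # exactly on quantization levels
off_grid = on_grid + delta / 2.0                 # halfway between levels

print(acos_regularizer(on_grid, delta))   # effectively zero
print(acos_regularizer(off_grid, delta))  # strictly positive penalty
```

In an actual training loop, such a term would simply be added (with weight lam) to the task loss, so gradients pull weights toward the nearest quantization level while the task loss is optimized.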


 DOI: 10.21437/Interspeech.2020-1991

Cite as: Nguyen, H.D., Alexandridis, A., Mouchtaris, A. (2020) Quantization Aware Training with Absolute-Cosine Regularization for Automatic Speech Recognition. Proc. Interspeech 2020, 3366-3370, DOI: 10.21437/Interspeech.2020-1991.


@inproceedings{Nguyen2020,
  author={Hieu Duy Nguyen and Anastasios Alexandridis and Athanasios Mouchtaris},
  title={{Quantization Aware Training with Absolute-Cosine Regularization for Automatic Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3366--3370},
  doi={10.21437/Interspeech.2020-1991},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1991}
}