Twin Regularization for Online Speech Recognition

Mirco Ravanelli, Dmitriy Serdyuk, Yoshua Bengio

Online speech recognition is crucial for developing natural human-machine interfaces. This modality, however, is significantly more challenging than off-line ASR, since real-time/low-latency constraints inevitably hinder the use of future information, which is known to be very helpful for making robust predictions. A popular way to mitigate this issue is to feed neural acoustic models with context windows that gather some future frames. This introduces a latency that depends on the number of look-ahead features employed. This paper explores a different approach, based on estimating the future rather than waiting for it. Our technique encourages the hidden representations of a unidirectional recurrent network to embed some useful information about the future. Inspired by a recently proposed technique called Twin Networks, we add a regularization term that forces forward hidden states to be as close as possible to cotemporal backward ones, computed by a "twin" neural network running backwards in time. Experiments conducted on a number of datasets, recurrent architectures, input features, and acoustic conditions show the effectiveness of this approach. One important advantage is that our method introduces no additional computation at test time compared to standard unidirectional recurrent networks.
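The core idea above can be sketched in a few lines: run a forward RNN and a backward "twin" RNN over the same utterance, then penalize the distance between their cotemporal hidden states. The following is a minimal numpy sketch, not the paper's implementation; the simple tanh RNN cell, the squared-L2 form of the penalty, and all names (`rnn_states`, `twin_loss`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_states(x, W, U, b, reverse=False):
    """Run a simple tanh RNN over x of shape (T, d_in).

    Returns the hidden state at every time step, shape (T, d_h).
    If reverse=True the recurrence runs backwards in time, so
    states[t] summarizes the future of the utterance from t onward.
    """
    T, d_h = x.shape[0], W.shape[0]
    h = np.zeros(d_h)
    states = np.zeros((T, d_h))
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(W @ h + U @ x[t] + b)
        states[t] = h
    return states

# Toy dimensions: T frames of d_in features, d_h hidden units.
T, d_in, d_h = 6, 3, 4
x = rng.standard_normal((T, d_in))

# Forward (online) network parameters.
Wf, Uf, bf = rng.standard_normal((d_h, d_h)), rng.standard_normal((d_h, d_in)), np.zeros(d_h)
# Backward "twin" network parameters (used only during training;
# discarded at test time, so inference cost is unchanged).
Wb, Ub, bb = rng.standard_normal((d_h, d_h)), rng.standard_normal((d_h, d_in)), np.zeros(d_h)

h_fwd = rnn_states(x, Wf, Uf, bf, reverse=False)
h_bwd = rnn_states(x, Wb, Ub, bb, reverse=True)

# Twin regularization term: mean squared distance between the
# forward state and the cotemporal backward state at each frame.
# During training this term is added (with a weight) to the main
# ASR loss; here we only compute its value.
twin_loss = np.mean(np.sum((h_fwd - h_bwd) ** 2, axis=1))
```

Because the twin penalty is only part of the training objective, the backward network is dropped entirely at inference, which is why the method adds no test-time cost over a plain unidirectional network.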

DOI: 10.21437/Interspeech.2018-1407

Cite as: Ravanelli, M., Serdyuk, D., Bengio, Y. (2018) Twin Regularization for Online Speech Recognition. Proc. Interspeech 2018, 3718-3722, DOI: 10.21437/Interspeech.2018-1407.

@inproceedings{ravanelli18_interspeech,
  author={Mirco Ravanelli and Dmitriy Serdyuk and Yoshua Bengio},
  title={Twin Regularization for Online Speech Recognition},
  booktitle={Proc. Interspeech 2018},
  pages={3718--3722},
  year={2018},
  doi={10.21437/Interspeech.2018-1407}
}