Continual Learning in Automatic Speech Recognition

Samik Sadhu, Hynek Hermansky


We emulate continual learning observed in real life, where new training data, representing a new application domain, are used for gradual improvement of an Automatic Speech Recognizer (ASR) trained on old domains. The data on which the original classifier was trained are no longer required, and we observe no loss of performance on the original domain. Further, on a previously unseen domain, our technique appears to yield a slight advantage over offline multi-condition training. The proposed learning technique is consistent with our previously studied ad hoc stream-attention-based multi-stream ASR.
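
The abstract describes the approach only at a high level. The following is a minimal, hypothetical PyTorch sketch (not the authors' code) of one way such continual learning could look: the old-domain acoustic model is kept frozen, so old-domain performance is preserved and the old training data are not needed; a new model is trained on new-domain data only; and the two streams are combined at test time with a simple entropy-based confidence weighting standing in for the stream attention mentioned above. Model sizes, feature dimensions, and the weighting rule are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_PHONES = 40, 42  # assumed feature / output sizes

def make_acoustic_model():
    # Small feed-forward phone classifier used as a stand-in acoustic model.
    return nn.Sequential(
        nn.Linear(FEAT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, NUM_PHONES),
    )

old_model = make_acoustic_model()   # assumed to be already trained on the old domain
new_model = make_acoustic_model()   # to be trained on new-domain data only

for p in old_model.parameters():    # old model stays frozen: no old data needed,
    p.requires_grad = False         # so old-domain performance is unaffected

def train_new_domain(loader, epochs=5):
    # Gradual improvement using only new-domain frames and phone labels.
    opt = torch.optim.Adam(new_model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for feats, phone_targets in loader:
            loss = F.cross_entropy(new_model(feats), phone_targets)
            opt.zero_grad()
            loss.backward()
            opt.step()

def combined_posteriors(feats):
    # Stream-attention-style combination: each stream is weighted by a simple
    # confidence proxy based on the entropy of its posteriors (lower entropy
    # gets a higher weight). The paper's actual stream-attention rule may differ.
    posts = [F.softmax(m(feats), dim=-1) for m in (old_model, new_model)]
    entropy = torch.stack(
        [-(p * p.clamp_min(1e-8).log()).sum(-1) for p in posts], dim=-1)
    w = F.softmax(-entropy, dim=-1)
    return sum(w[..., i:i + 1] * posts[i] for i in range(len(posts)))
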


DOI: 10.21437/Interspeech.2020-2962

Cite as: Sadhu, S., Hermansky, H. (2020) Continual Learning in Automatic Speech Recognition. Proc. Interspeech 2020, 1246-1250, DOI: 10.21437/Interspeech.2020-2962.


@inproceedings{Sadhu2020,
  author={Samik Sadhu and Hynek Hermansky},
  title={{Continual Learning in Automatic Speech Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1246--1250},
  doi={10.21437/Interspeech.2020-2962},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2962}
}