Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation

Won Ik Cho, Donghyun Kwak, Ji Won Yoon, Nam Soo Kim


Speech is one of the most effective means of communication and is rich in information that conveys the speaker's thoughts. However, mainly due to the cumbersome processing of acoustic features, phoneme- or word-level posterior probabilities have often been discarded in natural language understanding. Thus, some recent spoken language understanding (SLU) modules have adopted end-to-end structures that preserve this uncertainty information, which reduces the propagation of speech recognition errors and improves computational efficiency. We claim that, in this process, speech comprehension can benefit from the inference of massive pre-trained language models (LMs). Building on recent cross-modal distillation methodologies, we transfer knowledge from a Transformer-based text LM to an SLU module that may face a data shortage. We demonstrate the validity of our proposal through performance on Fluent Speech Commands, an English SLU benchmark. We thereby experimentally verify our hypothesis that knowledge can be shared from the top layer of the LM to a fully speech-based module, in which the abstracted speech representation is expected to meet the LM's semantic representation.
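
To make the distillation idea concrete, below is a minimal PyTorch sketch of a cross-modal objective in the spirit of the abstract: a speech encoder is trained with an intent-classification loss plus a term that pulls its pooled utterance representation toward the text LM's top-layer sentence representation. The encoder architecture, mean pooling, MSE matching loss, loss weight alpha, and the dimensions (a 768-dimensional LM representation; 31 intents as in Fluent Speech Commands) are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Toy speech encoder: acoustic frames -> pooled utterance embedding."""
    def __init__(self, feat_dim=80, hidden=256, lm_dim=768, n_intents=31):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True,
                           bidirectional=True)
        # Project into the text LM's representation space so the two
        # modalities can be compared directly.
        self.proj = nn.Linear(2 * hidden, lm_dim)
        self.classifier = nn.Linear(lm_dim, n_intents)

    def forward(self, feats):                  # feats: (B, T, feat_dim)
        out, _ = self.rnn(feats)               # (B, T, 2*hidden)
        pooled = out.mean(dim=1)               # simple mean pooling over time
        rep = self.proj(pooled)                # (B, lm_dim)
        return rep, self.classifier(rep)

def distill_step(speech_enc, feats, lm_rep, labels, alpha=0.5):
    """One training step: intent cross-entropy plus an L2 match to the
    LM's top-layer sentence representation (lm_rep is precomputed from
    the ground-truth transcript, so the LM is not needed at test time)."""
    rep, logits = speech_enc(feats)
    ce = nn.functional.cross_entropy(logits, labels)
    kd = nn.functional.mse_loss(rep, lm_rep)   # cross-modal distillation term
    return alpha * ce + (1 - alpha) * kd

One design point worth noting: because the LM representation is only used as a training target, the resulting SLU module remains fully speech-based at inference, which is what makes the approach efficient.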


DOI: 10.21437/Interspeech.2020-1246

Cite as: Cho, W.I., Kwak, D., Yoon, J.W., Kim, N.S. (2020) Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation. Proc. Interspeech 2020, 896-900, DOI: 10.21437/Interspeech.2020-1246.


@inproceedings{Cho2020,
  author={Won Ik Cho and Donghyun Kwak and Ji Won Yoon and Nam Soo Kim},
  title={{Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={896--900},
  doi={10.21437/Interspeech.2020-1246},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1246}
}