Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning

Pavel Denisov, Ngoc Thang Vu


Spoken language understanding is typically based on pipeline architectures consisting of speech recognition and natural language understanding steps. These components are optimized independently to make use of the available data, but the overall system suffers from error propagation. In this paper, we propose a novel training method that enables pretrained contextual embeddings to process acoustic features. In particular, we extend the embedding model with an encoder from a pretrained speech recognition system in order to construct an end-to-end spoken language understanding system. Our proposed method is based on a teacher-student framework across the speech and text modalities that aligns the acoustic and semantic latent spaces. Experimental results on three benchmarks show that our system reaches performance comparable to the pipeline architecture without using any training data, and that it outperforms the pipeline on two out of three benchmarks after fine-tuning with ten examples per class.
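The core idea of the cross-modal teacher-student objective can be illustrated with a minimal sketch: a text encoder (teacher) produces semantic embeddings, and the speech encoder (student) is trained so that its pooled acoustic representation matches the teacher's pooled representation. The dimensions, mean-pooling choice, and MSE loss below are illustrative assumptions, not the exact configuration from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (assumptions, not from the paper):
# 100 acoustic frames and 12 text tokens, shared embedding size 256.
frames = rng.standard_normal((100, 256))   # speech encoder (student) outputs
tokens = rng.standard_normal((12, 256))    # text encoder (teacher) embeddings


def pool(sequence: np.ndarray) -> np.ndarray:
    """Mean-pool a variable-length sequence to a fixed-size vector."""
    return sequence.mean(axis=0)


def alignment_loss(student_seq: np.ndarray, teacher_seq: np.ndarray) -> float:
    """MSE between pooled student and teacher representations.

    Minimizing this loss pushes the acoustic latent space toward the
    semantic (text) latent space, so the pretrained text model can be
    reused on top of the speech encoder.
    """
    s, t = pool(student_seq), pool(teacher_seq)
    return float(np.mean((s - t) ** 2))


loss = alignment_loss(frames, tokens)
```

In practice the teacher's weights are frozen and only the student speech encoder receives gradients, so the semantic space learned from text remains fixed while the acoustic side adapts to it.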


DOI: 10.21437/Interspeech.2020-2456

Cite as: Denisov, P., Vu, N.T. (2020) Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning. Proc. Interspeech 2020, 881-885, DOI: 10.21437/Interspeech.2020-2456.


@inproceedings{Denisov2020,
  author={Pavel Denisov and Ngoc Thang Vu},
  title={{Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={881--885},
  doi={10.21437/Interspeech.2020-2456},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2456}
}