End-to-End Spoken Language Understanding Without Full Transcripts

Hong-Kwang J. Kuo, Zoltán Tüske, Samuel Thomas, Yinghui Huang, Kartik Audhkhasi, Brian Kingsbury, Gakuto Kurata, Zvi Kons, Ron Hoory, Luis Lastras


An essential component of spoken language understanding (SLU) is slot filling: representing the meaning of a spoken utterance with semantic entity labels. In this paper, we develop end-to-end (E2E) SLU systems that directly convert speech input to semantic entities, and we investigate whether these E2E SLU models can be trained solely on semantic entity annotations, without word-for-word transcripts. Training such models is attractive because it can drastically reduce the cost of data collection. We created two types of speech-to-entities models, a CTC model and an attention-based encoder-decoder model, by adapting models originally trained for speech recognition. Because our experiments involve speech input, these systems must correctly recognize both the entity label and the words representing the entity value. In our speech-to-entities experiments on the ATIS corpus, both the CTC and attention models showed an impressive ability to skip non-entity words: there was little degradation when training on entities alone versus full transcripts. We also explored the scenario in which the entities appear in an order not necessarily related to their spoken order in the utterance. With its ability to reorder output, the attention model did remarkably well, with only about 2% degradation in speech-to-bag-of-entities F1 score.
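The bag-of-entities F1 score mentioned above can be sketched as follows: treat reference and hypothesis entities as multisets of (label, value) pairs and score matches independent of prediction order. This is a minimal illustration, not the paper's scoring code; the function name and multiset treatment are assumptions.

```python
from collections import Counter

def bag_of_entities_f1(ref_entities, hyp_entities):
    """F1 over bags (multisets) of (label, value) entity pairs,
    ignoring the order in which entities are produced."""
    ref, hyp = Counter(ref_entities), Counter(hyp_entities)
    tp = sum((ref & hyp).values())              # matched entity pairs
    precision = tp / max(sum(hyp.values()), 1)  # guard empty hypothesis
    recall = tp / max(sum(ref.values()), 1)     # guard empty reference
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, with an ATIS-style reference of two entities and a hypothesis that recovers only one, precision is 1.0 and recall 0.5, giving F1 of about 0.667.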


DOI: 10.21437/Interspeech.2020-2924

Cite as: Kuo, H.-K.J., Tüske, Z., Thomas, S., Huang, Y., Audhkhasi, K., Kingsbury, B., Kurata, G., Kons, Z., Hoory, R., Lastras, L. (2020) End-to-End Spoken Language Understanding Without Full Transcripts. Proc. Interspeech 2020, 906-910, DOI: 10.21437/Interspeech.2020-2924.


@inproceedings{Kuo2020,
  author={Hong-Kwang J. Kuo and Zoltán Tüske and Samuel Thomas and Yinghui Huang and Kartik Audhkhasi and Brian Kingsbury and Gakuto Kurata and Zvi Kons and Ron Hoory and Luis Lastras},
  title={{End-to-End Spoken Language Understanding Without Full Transcripts}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={906--910},
  doi={10.21437/Interspeech.2020-2924},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2924}
}