SpeechBERT: An Audio-and-Text Jointly Learned Language Model for End-to-End Spoken Question Answering

Yung-Sung Chuang, Chi-Liang Liu, Hung-yi Lee, Lin-shan Lee


While various end-to-end models for spoken language understanding tasks have been explored recently, this paper is probably the first known attempt to challenge the very difficult task of end-to-end spoken question answering (SQA). Learning from the very successful BERT model for various text processing tasks, we propose an audio-and-text jointly learned SpeechBERT model. This model outperforms the conventional approach of cascading ASR with a text question answering (TQA) model on datasets whose answer spans contain ASR errors, because the end-to-end model can extract information directly from the audio before ASR errors are introduced. Ensembling the proposed end-to-end model with the cascade architecture yields even better performance. Beyond its potential for end-to-end SQA, SpeechBERT can also be applied to many other spoken language understanding tasks, just as BERT has been to many text processing tasks.


DOI: 10.21437/Interspeech.2020-1570

Cite as: Chuang, Y., Liu, C., Lee, H., Lee, L. (2020) SpeechBERT: An Audio-and-Text Jointly Learned Language Model for End-to-End Spoken Question Answering. Proc. Interspeech 2020, 4168-4172, DOI: 10.21437/Interspeech.2020-1570.


@inproceedings{Chuang2020,
  author={Yung-Sung Chuang and Chi-Liang Liu and Hung-yi Lee and Lin-shan Lee},
  title={{SpeechBERT: An Audio-and-Text Jointly Learned Language Model for End-to-End Spoken Question Answering}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={4168--4172},
  doi={10.21437/Interspeech.2020-1570},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1570}
}