Cross-Lingual Multi-Task Neural Architecture for Spoken Language Understanding

Yujiang Li, Xuemin Zhao, Weiqun Xu, Yonghong Yan

Cross-lingual spoken language understanding (SLU) systems traditionally rely on machine translation services to achieve language portability and reduce the need for human supervision. However, such approaches are restricted by the availability of parallel corpora and by their model architectures. Assuming that reliable, human-supervised data are available, which permits non-parallel corpora and avoids translation errors, this paper explores cross-lingual knowledge transfer at multiple levels by taking advantage of neural architectures. We first investigate a joint model of slot filling and intent determination for SLU that alleviates the out-of-vocabulary problem and explicitly models dependencies between output labels by combining character and word representations, a bidirectional Long Short-Term Memory (BiLSTM) network, and conditional random fields, while an attention-based classifier is introduced for intent determination. Knowledge transfer then operates at the character level and the sequence level: sharing character representations lets languages with similar alphabets share morphological and phonological information, and separate encoders adaptively acquire language-general and language-specific knowledge to characterize each sequence. Experimental results on the MIT-Restaurant-Corpus and the ATIS corpora in different languages demonstrate the effectiveness of the proposed methods.
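Two of the abstract's ingredients, combining character-level with word-level representations and attention-based pooling for intent classification, can be illustrated with a toy sketch. This is not the paper's implementation (it omits the BiLSTM encoder and the CRF layer, and uses deterministic hash-based pseudo-embeddings in place of learned vectors); all function names and dimensions below are illustrative assumptions.

```python
import hashlib
import math
import random

D = 4  # toy embedding size

def embed(token, dim=D):
    # Deterministic pseudo-embedding seeded from a hash; a stand-in for a
    # learned lookup table (assumption, not the paper's trained embeddings).
    seed = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
    rnd = random.Random(seed)
    return [rnd.uniform(-1.0, 1.0) for _ in range(dim)]

def char_representation(word):
    # Average of character embeddings: sub-word information that languages
    # with similar alphabets could share (here via a shared char vocabulary).
    vecs = [embed("ch:" + c) for c in word]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def word_representation(word):
    # Concatenate the word-level and character-level vectors, mitigating
    # out-of-vocabulary words: unseen words still get a character vector.
    return embed("w:" + word) + char_representation(word)

def attention_pool(seq_vecs, query):
    # Dot-product attention over the token vectors; the weighted sum acts
    # as the utterance vector fed to an intent classifier.
    scores = [sum(q * h for q, h in zip(query, vec)) for vec in seq_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    pooled = [sum(w * vec[i] for w, vec in zip(weights, seq_vecs))
              for i in range(len(seq_vecs[0]))]
    return pooled, weights

words = "book a table for two".split()
seq = [word_representation(w) for w in words]
query = [1.0] * len(seq[0])       # a fixed query; learned in practice
utt_vec, attn = attention_pool(seq, query)
print(len(utt_vec), len(attn))    # utterance vector size, one weight per token
```

In the full model, an encoder such as a BiLSTM would contextualize the concatenated vectors before the CRF scores slot-label sequences and the attention-pooled vector feeds the intent classifier.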

DOI: 10.21437/Interspeech.2018-1039

Cite as: Li, Y., Zhao, X., Xu, W., Yan, Y. (2018) Cross-Lingual Multi-Task Neural Architecture for Spoken Language Understanding. Proc. Interspeech 2018, 566-570, DOI: 10.21437/Interspeech.2018-1039.

@inproceedings{li18_interspeech,
  author={Yujiang Li and Xuemin Zhao and Weiqun Xu and Yonghong Yan},
  title={Cross-Lingual Multi-Task Neural Architecture for Spoken Language Understanding},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={566--570},
  doi={10.21437/Interspeech.2018-1039}
}