Style Attuned Pre-Training and Parameter Efficient Fine-Tuning for Spoken Language Understanding

Jin Cao, Jun Wang, Wael Hamza, Kelly Vanee, Shang-Wen Li


Neural models have yielded state-of-the-art results in spoken language understanding (SLU) problems; however, these models require a significant amount of domain-specific labeled examples for training, which is prohibitively expensive. While pre-trained language models like BERT have been shown to capture a massive amount of knowledge by learning from unlabeled corpora and to solve SLU with fewer labeled examples for adaptation, the encoding of knowledge is implicit and agnostic to downstream tasks. Such encoding results in inefficient parameter usage: an entirely new model is required for every domain. To address these challenges, we introduce a novel SLU framework comprising a conversational language modeling (CLM) pre-training task and a light encoder architecture. The CLM pre-training enables networks to capture the representation of language in conversational style in the presence of ASR errors. The light encoder architecture separates the shared pre-trained networks from the mappings of generally encoded knowledge to specific SLU domains, allowing domain adaptation to be performed solely at the light encoder and thus increasing efficiency. With the framework, we match the performance of state-of-the-art SLU results on Alexa internal datasets and on two public ones (ATIS, SNIPS), adding only 4.4% parameters per task.
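The parameter-efficiency idea in the abstract (a shared pre-trained encoder kept fixed, with only a small per-domain "light encoder" and task heads trained per SLU domain) can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the module names, layer sizes, and the single-Transformer-layer light encoder are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of parameter-efficient fine-tuning for SLU:
# a frozen shared encoder plus a small trainable per-domain "light encoder"
# with intent (utterance-level) and slot (token-level) heads.
import torch
import torch.nn as nn


class LightEncoderSLU(nn.Module):
    def __init__(self, shared_encoder: nn.Module, hidden_size: int,
                 num_intents: int, num_slot_labels: int):
        super().__init__()
        # Shared pre-trained encoder: frozen, reused across all domains.
        self.shared_encoder = shared_encoder
        for p in self.shared_encoder.parameters():
            p.requires_grad = False

        # Per-domain light encoder: a single Transformer layer (assumption).
        self.light_encoder = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8,
            dim_feedforward=2 * hidden_size, batch_first=True)
        self.intent_head = nn.Linear(hidden_size, num_intents)
        self.slot_head = nn.Linear(hidden_size, num_slot_labels)

    def forward(self, token_features: torch.Tensor):
        # token_features: (batch, seq_len, hidden_size)
        with torch.no_grad():
            shared_out = self.shared_encoder(token_features)
        adapted = self.light_encoder(shared_out)
        intent_logits = self.intent_head(adapted[:, 0])  # first token as summary
        slot_logits = self.slot_head(adapted)             # per-token slot labels
        return intent_logits, slot_logits


if __name__ == "__main__":
    hidden = 256
    # Stand-in for a frozen pre-trained encoder (e.g., a BERT-style stack).
    shared = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU())
    model = LightEncoderSLU(shared, hidden, num_intents=5, num_slot_labels=20)

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable fraction: {trainable / total:.1%}")  # only the light part
```

In this setup only the light encoder and the two heads receive gradients, so adapting to a new domain adds a small fraction of the total parameters, which is the effect the abstract quantifies as 4.4% per task.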


DOI: 10.21437/Interspeech.2020-2907

Cite as: Cao, J., Wang, J., Hamza, W., Vanee, K., Li, S. (2020) Style Attuned Pre-Training and Parameter Efficient Fine-Tuning for Spoken Language Understanding. Proc. Interspeech 2020, 1570-1574, DOI: 10.21437/Interspeech.2020-2907.


@inproceedings{Cao2020,
  author={Jin Cao and Jun Wang and Wael Hamza and Kelly Vanee and Shang-Wen Li},
  title={{Style Attuned Pre-Training and Parameter Efficient Fine-Tuning for Spoken Language Understanding}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1570--1574},
  doi={10.21437/Interspeech.2020-2907},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2907}
}