Improving End-to-End Speech-to-Intent Classification with Reptile

Yusheng Tian, Philip John Gorinski


End-to-end spoken language understanding (SLU) systems have many advantages over conventional pipeline systems, but collecting in-domain speech data to train an end-to-end system is costly and time-consuming. This raises a question: how can an end-to-end SLU system be trained with limited amounts of data? Many researchers have explored approaches that make use of other related data resources, typically by pre-training parts of the model on high-resource speech recognition. In this paper, we suggest improving the generalization performance of SLU models with a non-standard learning algorithm, Reptile. Though Reptile was originally proposed for model-agnostic meta-learning, we argue that it can also be used to directly learn a target task and result in better generalization than conventional gradient descent. In this work, we apply Reptile to the task of end-to-end spoken intent classification. Experiments on four datasets of different languages and domains show improvements in intent prediction accuracy, both when Reptile is used alone and when it is used in addition to pre-training.
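The single-task use of Reptile described above can be sketched as follows. This is a minimal illustration on a toy linear-regression problem, not the paper's SLU model: the function names (`inner_sgd`, `reptile_train`) and all hyperparameter values are illustrative assumptions. The core idea is the Reptile outer update, which moves the weights part-way toward the result of a few inner SGD steps on a sampled minibatch rather than applying the gradient directly.

```python
import numpy as np

def inner_sgd(theta, X, y, lr=0.01, steps=5):
    # Run a few plain gradient steps on mean-squared error,
    # starting from the current initialisation theta.
    phi = theta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ phi - y) / len(y)
        phi -= lr * grad
    return phi

def reptile_train(X, y, dim, outer_steps=200, meta_lr=0.5, batch=16, seed=0):
    # Single-task Reptile: sample a minibatch (playing the role of a
    # "task"), adapt with inner SGD, then interpolate the weights
    # toward the adapted copy.  Hyperparameters are illustrative.
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(outer_steps):
        idx = rng.choice(len(y), size=batch, replace=False)
        phi = inner_sgd(theta, X[idx], y[idx])
        theta += meta_lr * (phi - theta)  # Reptile outer update
    return theta
```

With `meta_lr = 1` and a single inner step, this reduces to ordinary minibatch SGD; the interpolation with `meta_lr < 1` and multiple inner steps is what distinguishes Reptile when used as a direct training algorithm.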


DOI: 10.21437/Interspeech.2020-1160

Cite as: Tian, Y., Gorinski, P.J. (2020) Improving End-to-End Speech-to-Intent Classification with Reptile. Proc. Interspeech 2020, 891-895, DOI: 10.21437/Interspeech.2020-1160.


@inproceedings{Tian2020,
  author={Yusheng Tian and Philip John Gorinski},
  title={{Improving End-to-End Speech-to-Intent Classification with Reptile}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={891--895},
  doi={10.21437/Interspeech.2020-1160},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1160}
}