Task-Oriented Dialog Generation with Enhanced Entity Representation

Zhenhao He, Jiachun Wang, Jian Chen

Recent advances in neural sequence-to-sequence models have led to promising results for end-to-end task-oriented dialog generation. Such frameworks enable a decoder to retrieve knowledge from the dialog history and the knowledge base during generation. However, these models usually rely on learned word embeddings as entity representations, which makes it difficult to handle rare and unknown entities. In this work, we propose a novel enhanced entity representation (EER) that simultaneously captures context-sensitive and structure-aware information about each entity. Our proposed method enables the decoder both to fetch relevant knowledge more accurately and to incorporate grounding knowledge into dialog generation more effectively. Experimental results on two publicly available dialog datasets show that our model outperforms state-of-the-art data-driven task-oriented dialog models. Moreover, we conduct an Out-of-Vocabulary (OOV) test to demonstrate the superiority of EER in handling the common OOV problem.
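The paper itself specifies the EER architecture; the abstract only names its two ingredients. As a purely illustrative sketch (not the authors' actual model), the idea of fusing a context-sensitive view and a structure-aware view with the plain word embedding might look like the following, where the attention scheme, the neighbor-averaging, and the fusion projection `W` are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

def context_repr(entity_vec, history_vecs):
    """Context-sensitive view: attention-weighted average of dialog-history
    token vectors, using the entity's word embedding as the query."""
    scores = history_vecs @ entity_vec          # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over history tokens
    return weights @ history_vecs               # (DIM,)

def structure_repr(neighbor_vecs):
    """Structure-aware view: mean of the vectors of the entity's
    knowledge-base neighbors (e.g. other cells in the same KB row)."""
    return neighbor_vecs.mean(axis=0)           # (DIM,)

def enhanced_entity_repr(entity_vec, history_vecs, neighbor_vecs, W):
    """Fuse word embedding + both views via a projection W.
    W stands in for a trained parameter; here it is random."""
    fused = np.concatenate([
        entity_vec,
        context_repr(entity_vec, history_vecs),
        structure_repr(neighbor_vecs),
    ])                                          # (3 * DIM,)
    return np.tanh(W @ fused)                   # (DIM,)

# Toy usage: one entity, a 5-token dialog history, 3 KB neighbors.
entity = rng.standard_normal(DIM)
history = rng.standard_normal((5, DIM))
neighbors = rng.standard_normal((3, DIM))
W = rng.standard_normal((DIM, 3 * DIM))
eer = enhanced_entity_repr(entity, history, neighbors, W)
print(eer.shape)  # (8,)
```

A representation of this shape varies with the dialog context and the KB structure rather than being a fixed lookup, which is what lets the decoder ground rare or unseen entities.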

DOI: 10.21437/Interspeech.2020-1037

Cite as: He, Z., Wang, J., Chen, J. (2020) Task-Oriented Dialog Generation with Enhanced Entity Representation. Proc. Interspeech 2020, 3905-3909, DOI: 10.21437/Interspeech.2020-1037.

@inproceedings{he2020interspeech,
  author={Zhenhao He and Jiachun Wang and Jian Chen},
  title={{Task-Oriented Dialog Generation with Enhanced Entity Representation}},
  booktitle={Proc. Interspeech 2020},
  pages={3905--3909},
  year={2020},
  doi={10.21437/Interspeech.2020-1037}
}