Conditional Response Augmentation for Dialogue Using Knowledge Distillation

Myeongho Jeong, Seungtaek Choi, Hojae Han, Kyungho Kim, Seung-won Hwang


This paper studies the dialogue response selection task. As state-of-the-art approaches are neural models requiring a large training set, data augmentation is essential to overcome the sparsity of observational annotation, where only one observed response is annotated as gold. In this paper, we propose counterfactual augmentation: considering whether an unobserved utterance would "counterfactually" replace the labelled response for the given context, and augmenting only if that is the case. We empirically show that our pipeline improves BERT-based models in two different response selection tasks without incurring annotation overheads.
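As a rough illustration of the augmentation idea sketched in the abstract, the snippet below shows one possible shape of the pipeline: a teacher matching model scores unobserved utterances against the context, and a candidate is added to the training set only if it could plausibly replace the observed gold response, carrying the teacher's score as a distilled soft label. All names (teacher_score, augment, the threshold form) are hypothetical illustrations, not the authors' actual implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    context: str
    response: str
    label: float  # soft label distilled from the teacher; 1.0 for the observed gold


def teacher_score(context: str, candidate: str) -> float:
    """Stand-in for a teacher matching model (e.g. a BERT-based ranker)
    that returns a relevance score in [0, 1] for a (context, candidate) pair."""
    raise NotImplementedError  # hypothetical: plug in a trained teacher here


def augment(context: str, gold: str, unobserved: List[str],
            threshold: float = 0.9) -> List[Example]:
    """Keep the observed gold response, and add an unobserved utterance only if
    the teacher judges it could 'counterfactually' replace the gold response,
    i.e. it scores comparably to the gold for this context (assumed criterion)."""
    gold_score = teacher_score(context, gold)
    augmented = [Example(context, gold, 1.0)]
    for cand in unobserved:
        score = teacher_score(context, cand)
        if score >= threshold * gold_score:                   # counterfactual check
            augmented.append(Example(context, cand, score))   # soft label via distillation
    return augmented
```

The augmented examples, with their teacher-assigned soft labels, would then be used to train the student response-selection model alongside the original annotated data.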


DOI: 10.21437/Interspeech.2020-1968

Cite as: Jeong, M., Choi, S., Han, H., Kim, K., Hwang, S. (2020) Conditional Response Augmentation for Dialogue Using Knowledge Distillation. Proc. Interspeech 2020, 3890-3894, DOI: 10.21437/Interspeech.2020-1968.


@inproceedings{Jeong2020,
  author={Myeongho Jeong and Seungtaek Choi and Hojae Han and Kyungho Kim and Seung-won Hwang},
  title={{Conditional Response Augmentation for Dialogue Using Knowledge Distillation}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3890--3894},
  doi={10.21437/Interspeech.2020-1968},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1968}
}