Abstractive Spoken Document Summarization Using Hierarchical Model with Multi-Stage Attention Diversity Optimization

Potsawee Manakul, Mark J.F. Gales, Linlin Wang


Abstractive summarization is a standard task for written documents, such as news articles. Applying summarization schemes to spoken documents is more challenging, especially in situations involving human interactions, such as meetings. Here, utterances tend not to form complete sentences and sometimes contain little information. Moreover, speech disfluencies are present, as are recognition errors when automatic speech recognition (ASR) is used. For current attention-based sequence-to-sequence summarization systems, these additional challenges can yield a poor attention distribution over the spoken document's words and utterances, impacting performance. In this work, we propose a multi-stage method based on a hierarchical encoder-decoder model that explicitly models the utterance-level attention distribution at training time, and that enforces diversity at inference time using a unigram diversity term. Furthermore, multitask learning tasks, including dialogue act classification and extractive summarization, are incorporated. The performance of the system is evaluated on the AMI meeting corpus. Including both the training and inference diversity terms improves performance, outperforming current state-of-the-art systems in terms of ROUGE scores. Additionally, the impact of ASR errors, as well as performance on the multitask learning tasks, is evaluated.
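The abstract does not spell out the unigram diversity term used at inference, but the general idea behind such terms is to down-weight candidate tokens that have already been generated during decoding. A minimal illustrative sketch (the function name and `penalty` hyperparameter are assumptions, not the paper's exact formulation) might look like:

```python
from collections import Counter

def apply_unigram_diversity(log_probs, generated, penalty=1.0):
    """Rescore candidate tokens by penalizing unigrams already emitted.

    log_probs: dict mapping candidate token -> model log-probability
    generated: list of tokens produced so far in the summary
    penalty:   illustrative weight of the diversity term (assumption)
    """
    counts = Counter(generated)
    return {tok: lp - penalty * counts[tok] for tok, lp in log_probs.items()}

# Toy decoding step: "the" has already appeared twice, so its score drops
# below "meeting", which the rescored decoder now prefers.
scores = {"the": -0.1, "meeting": -0.5, "agenda": -0.9}
rescored = apply_unigram_diversity(scores, ["the", "cat", "the"], penalty=0.5)
best = max(rescored, key=rescored.get)  # -> "meeting"
```

In beam-search decoding, such a rescoring would typically be applied to each hypothesis at every step before candidates are ranked.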


DOI: 10.21437/Interspeech.2020-1683

Cite as: Manakul, P., Gales, M.J.F., Wang, L. (2020) Abstractive Spoken Document Summarization Using Hierarchical Model with Multi-Stage Attention Diversity Optimization. Proc. Interspeech 2020, 4248-4252, DOI: 10.21437/Interspeech.2020-1683.


@inproceedings{Manakul2020,
  author={Potsawee Manakul and Mark J.F. Gales and Linlin Wang},
  title={{Abstractive Spoken Document Summarization Using Hierarchical Model with Multi-Stage Attention Diversity Optimization}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4248--4252},
  doi={10.21437/Interspeech.2020-1683},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1683}
}