Hierarchical LSTMs with Joint Learning for Estimating Customer Satisfaction from Contact Center Calls

Atsushi Ando, Ryo Masumura, Hosana Kamiyama, Satoshi Kobashikawa, Yushi Aono


This paper presents joint modeling of turn-level and call-level customer satisfaction in contact center dialogues. Our key idea is to feed the turn-level estimation results directly into call-level estimation and optimize both jointly, whereas previous work treated the two estimations as independent. The proposed joint modeling is achieved by stacking two types of long short-term memory recurrent neural networks (LSTM-RNNs). The lower layer employs an LSTM-RNN for sequential labeling of turn-level customer satisfaction, in which each label is estimated from context information extracted not only from the target turn but also from the surrounding turns. The upper layer uses another LSTM-RNN to estimate call-level customer satisfaction labels from the full sequence of estimated turn-level customer satisfaction labels. These two networks can be efficiently optimized by joint learning over both types of labels. Experiments show that the proposed method outperforms a conventional support vector machine based method in terms of both turn-level and call-level customer satisfaction, with relative error reductions of over 20%.
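The stacked two-layer architecture described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: all class names, feature dimensions, and the random weight initialization are assumptions, and the joint training step is omitted. The key structural point it shows is that the upper LSTM consumes the lower LSTM's per-turn label posteriors rather than raw features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class LSTMCell:
    """Minimal LSTM cell; weights are randomly initialized for illustration only."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.hidden_dim = hidden_dim
        # One stacked weight matrix for the input/forget/output/candidate gates.
        self.W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        return h, c

class HierarchicalCSModel:
    """Hypothetical sketch of the stacked model: the lower LSTM tags each turn
    with a customer-satisfaction label posterior, and the upper LSTM reads the
    sequence of those posteriors to emit one call-level label posterior."""
    def __init__(self, feat_dim, hidden_dim, n_turn_labels, n_call_labels, seed=0):
        rng = np.random.default_rng(seed)
        self.lower = LSTMCell(feat_dim, hidden_dim, rng)
        self.upper = LSTMCell(n_turn_labels, hidden_dim, rng)
        self.W_turn = rng.standard_normal((n_turn_labels, hidden_dim)) * 0.1
        self.W_call = rng.standard_normal((n_call_labels, hidden_dim)) * 0.1

    def forward(self, turns):
        # Lower layer: sequential labeling over turns.
        h = c = np.zeros(self.lower.hidden_dim)
        turn_posteriors = []
        for x in turns:                      # one feature vector per turn
            h, c = self.lower.step(x, h, c)
            turn_posteriors.append(softmax(self.W_turn @ h))
        # Upper layer: call-level estimation from turn-level estimates.
        h2 = c2 = np.zeros(self.upper.hidden_dim)
        for p in turn_posteriors:
            h2, c2 = self.upper.step(p, h2, c2)
        call_posterior = softmax(self.W_call @ h2)
        return turn_posteriors, call_posterior
```

In joint learning, a single loss combining the turn-level and call-level cross-entropies would be backpropagated through both layers at once; that optimization code is left out of this sketch.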


DOI: 10.21437/Interspeech.2017-725

Cite as: Ando, A., Masumura, R., Kamiyama, H., Kobashikawa, S., Aono, Y. (2017) Hierarchical LSTMs with Joint Learning for Estimating Customer Satisfaction from Contact Center Calls. Proc. Interspeech 2017, 1716-1720, DOI: 10.21437/Interspeech.2017-725.


@inproceedings{Ando2017,
  author={Atsushi Ando and Ryo Masumura and Hosana Kamiyama and Satoshi Kobashikawa and Yushi Aono},
  title={Hierarchical LSTMs with Joint Learning for Estimating Customer Satisfaction from Contact Center Calls},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1716--1720},
  doi={10.21437/Interspeech.2017-725},
  url={http://dx.doi.org/10.21437/Interspeech.2017-725}
}