6th SIGdial Workshop on Discourse and Dialogue

Lisbon, Portugal
September 2-3, 2005

Quantitative Evaluation of User Simulation Techniques for Spoken Dialogue Systems

Jost Schatzmann (1), Kallirroi Georgila (2), Steve Young (1)

(1) Engineering Department, University of Cambridge, UK
(2) School of Informatics, University of Edinburgh, Scotland, UK

The lack of suitable training and testing data is currently a major roadblock in applying machine-learning techniques to dialogue management. Stochastic modelling of real users has been suggested as a solution to this problem, but to date few of the proposed models have been quantitatively evaluated on real data. Indeed, there are no established criteria for such an evaluation. This paper presents a systematic approach to testing user simulations and assesses the most prominent domain-independent techniques using a large DARPA Communicator corpus of human-computer dialogues. We show that while recent advances have led to significant improvements in simulation quality, simple statistical metrics are still sufficient to discern synthetic from real dialogues.
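To give a concrete sense of the kind of "simple statistical metric" the abstract refers to, the sketch below compares the dialogue-act frequency distributions of a real and a simulated corpus using a smoothed KL divergence. The toy corpora, act labels, and the choice of KL divergence are illustrative assumptions for this note, not the paper's exact evaluation procedure.

```python
from collections import Counter
import math

def act_distribution(dialogues):
    """Relative frequency of each user dialogue act in a corpus."""
    counts = Counter(act for dialogue in dialogues for act in dialogue)
    total = sum(counts.values())
    return {act: c / total for act, c in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) with epsilon smoothing for acts unseen in one corpus."""
    acts = set(p) | set(q)
    return sum(p.get(a, eps) * math.log(p.get(a, eps) / q.get(a, eps))
               for a in acts)

# Toy corpora of user dialogue-act sequences (illustrative only).
real = [["request", "provide_info", "confirm", "bye"],
        ["request", "provide_info", "provide_info", "bye"]]
simulated = [["request", "confirm", "bye"],
             ["request", "request", "confirm", "bye"]]

d = kl_divergence(act_distribution(real), act_distribution(simulated))
print(f"KL divergence (real || simulated): {d:.3f}")
```

A large divergence indicates that even first-order act statistics separate the synthetic corpus from the real one, which is the flavour of result the abstract reports.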

Bibliographic reference. Schatzmann, Jost / Georgila, Kallirroi / Young, Steve (2005): "Quantitative evaluation of user simulation techniques for spoken dialogue systems", In SIGdial6-2005, 45-54.