Increasing Recall of Lengthening Detection via Semi-Automatic Classification

Simon Betz, Jana Voße, Sina Zarrieß, Petra Wagner


Lengthening is the ideal hesitation strategy for synthetic speech and dialogue systems: it is unobtrusive and hard to notice, because it occurs frequently in everyday speech before phrase boundaries, in accentuation, and in hesitation. Despite this elusiveness, it buys valuable extra time for computation or information highlighting in incremental spoken dialogue systems. That same elusiveness, however, poses a challenge for extracting lengthening instances from corpus data: we suspect a recall problem, as human annotators may not label lengthening instances consistently. We address this issue by filtering corpus data for instances of lengthening with a simple classification method based on a threshold for normalized phone duration. The output is then manually labeled for disfluency and compared to an existing, fully manual disfluency annotation, showing that recall is significantly higher with semi-automatic pre-classification. This demonstrates that semi-automatic pre-selection is necessary to gather enough candidate data points for manual annotation and subsequent lengthening analyses, and that further improving the performance of the automatic classification is desirable. We evaluate human versus semi-automatic annotation in detail and train another classifier on the resulting dataset to check the integrity of the disfluent – non-disfluent distinction.
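The threshold-based pre-classification described in the abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes z-score normalization of durations per phone type and a hypothetical threshold value; the paper's actual normalization scheme and threshold may differ.

```python
from collections import defaultdict
from statistics import mean, stdev

def normalize_durations(phones):
    """Z-score normalize durations per phone label.
    phones: list of (label, duration_in_seconds) tuples."""
    by_label = defaultdict(list)
    for label, dur in phones:
        by_label[label].append(dur)
    # Per-label mean and standard deviation (guard against single-sample labels).
    stats = {lab: (mean(ds), stdev(ds) if len(ds) > 1 else 1.0)
             for lab, ds in by_label.items()}
    return [(label, (dur - stats[label][0]) / (stats[label][1] or 1.0))
            for label, dur in phones]

def lengthening_candidates(phones, threshold=2.0):
    """Flag phones whose normalized duration exceeds the threshold
    as candidates for subsequent manual disfluency annotation."""
    return [(label, z) for label, z in normalize_durations(phones)
            if z > threshold]
```

For example, a phone whose duration is an outlier relative to other tokens of the same phone type would be flagged, while typical durations pass the filter; the flagged candidates would then go to human annotators for the disfluent vs. non-disfluent decision.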


DOI: 10.21437/Interspeech.2017-1528

Cite as: Betz, S., Voße, J., Zarrieß, S., Wagner, P. (2017) Increasing Recall of Lengthening Detection via Semi-Automatic Classification. Proc. Interspeech 2017, 1084-1088, DOI: 10.21437/Interspeech.2017-1528.


@inproceedings{Betz2017,
  author={Simon Betz and Jana Voße and Sina Zarrieß and Petra Wagner},
  title={Increasing Recall of Lengthening Detection via Semi-Automatic Classification},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1084--1088},
  doi={10.21437/Interspeech.2017-1528},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1528}
}