Sixth ISCA Workshop on Speech Synthesis

Bonn, Germany
August 22-24, 2007

Perceptual Annotation of Expressive Speech

Lijuan Wang (1), Min Chu (1), Yaya Peng (2), Yong Zhao (1), Frank K. Soong (1)

(1) Microsoft Research Asia, Beijing, China
(2) Department of Linguistics & Modern Languages, The Chinese University of Hong Kong, China

A six-dimensional label set for annotating the expressiveness of speech samples is proposed. Unlike conventional emotion annotation schemes, which require annotators to make rather difficult judgments about a speaker's high-level emotional state, the new set of six low-level labels, i.e., "pitch", "vocal effort", "voice age", "loudness", "speaking rate", and "speaking manner", can be applied more easily by non-experts. 800 expressive utterances were annotated by four annotators with the proposed labels, and the labeling shows good inter-annotator consistency (71%). The proposed six labels capture the different speaking styles (expressiveness) in the audio-book well. The difference between styles, measured by style intensity along the six labels, is highly correlated (0.85) with the perceptual distance obtained from a subjective AB test. A compact classification and regression tree (CART) is built to automatically group sentences of similar expressiveness into several "pure" speaking styles, and the interpretation of each speaking style can be read directly from the CART structure.
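The style-distance comparison summarized above can be sketched as follows. Each style is represented by an intensity vector over the six labels; pairwise Euclidean distances in label space are then correlated with perceptual distances from an AB test. All values below (the style names, intensity vectors, and AB-test distances) are hypothetical placeholders, not the paper's data:

```python
import math

# Hypothetical intensity profiles along the six perceptual labels
# (pitch, vocal effort, voice age, loudness, speaking rate, speaking
# manner); the styles and values are illustrative only.
STYLES = {
    "narration": [0.4, 0.3, 0.5, 0.4, 0.5, 0.3],
    "excited":   [0.8, 0.7, 0.4, 0.8, 0.8, 0.7],
    "soothing":  [0.2, 0.2, 0.6, 0.2, 0.3, 0.2],
}

def label_distance(a, b):
    """Euclidean distance between two six-label intensity vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Pairwise label-space distances between the three example styles.
names = list(STYLES)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
label_d = [label_distance(STYLES[a], STYLES[b]) for a, b in pairs]

# Hypothetical perceptual distances from an AB listening test (same pairs).
percept_d = [0.9, 0.3, 1.0]

print(round(pearson(label_d, percept_d), 2))
```

In the paper this correlation (0.85 on the real data) is what justifies using the six low-level labels as a proxy for perceived style difference.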


Bibliographic reference. Wang, Lijuan / Chu, Min / Peng, Yaya / Zhao, Yong / Soong, Frank K. (2007): "Perceptual annotation of expressive speech", in SSW6-2007, 46-51.