Speech Prosody 2010

Chicago, IL, USA
May 10-14, 2010

A Joint Acoustic-Articulatory Study of Nasal Spectral Reduction in Read versus Spontaneous Speaking Styles

Vikram Ramanarayanan (1), Dani Byrd (2), Louis Goldstein (2), Shrikanth Narayanan (1,2)

(1) Signal Analysis and Interpretation Laboratory, Ming Hsieh Department of Electrical Engineering,
(2) Department of Linguistics; University of Southern California, Los Angeles, CA 90089-0899, USA

Speaking style is a primary dimension of prosodic variation in speech. We present a novel automatic procedure for analyzing real-time magnetic resonance images (rt-MRI) of the human vocal tract recorded during read and spontaneous speech. The procedure is applied to rt-MRI data on nasal articulation and combined with acoustic analyses of the speech signal to examine differences in nasal production between read and spontaneous speech, with a particular focus on reduction. In this exploratory study, vowel-nasal-vowel (VNV) sequences from one speaker were examined, and measures were extracted from both the acoustic and articulatory signals. Significant differences were observed in the realization of constriction-forming events between the read and spontaneous speaking styles. Such an analysis has implications for understanding speech planning and for informing the design of automatic speech analysis algorithms.

Index Terms: speech production, real-time MRI, nasals, vocal tract, image motion analysis, read speech, spontaneous speech, spectral reduction.


Bibliographic reference. Ramanarayanan, Vikram / Byrd, Dani / Goldstein, Louis / Narayanan, Shrikanth (2010): "A joint acoustic-articulatory study of nasal spectral reduction in read versus spontaneous speaking styles", in SP-2010, paper 226.