Second Language Studies: Acquisition, Learning, Education and Technology

Tokyo, Japan
September 22-24, 2010

Visual Articulatory Feedback for Phonetic Correction in Second Language Learning

Pierre Badin, Atef Ben Youssef, Gérard Bailly, Frédéric Elisei, Thomas Hueber

GIPSA-lab (Département Parole & Cognition / ICP), UMR 5216 CNRS – Grenoble University, France

Orofacial clones can display speech articulation in an augmented mode, i.e. show all major speech articulators, including those usually hidden such as the tongue or the velum. Moreover, a number of studies suggest that the visual articulatory feedback provided by electropalatography (EPG) or ultrasound imaging is useful for speech therapy. This paper describes the latest developments in acoustic-to-articulatory inversion, based on statistical models, used to drive orofacial clones from the speech signal alone. It suggests that this technology could provide richer feedback than previously available, and that it would be useful in the domain of Computer Aided Pronunciation Training.
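The statistical acoustic-to-articulatory inversion mentioned above can be illustrated with a minimal sketch of a GMM-based mapping: a joint Gaussian mixture model is trained on paired acoustic and articulatory feature vectors, and articulatory trajectories are then estimated from acoustic input by the minimum mean-square-error (MMSE) conditional expectation. This is a generic sketch under assumed settings (synthetic data, feature dimensions, number of mixture components), not the authors' actual system or features.

```python
# Hypothetical sketch of GMM-based acoustic-to-articulatory inversion (MMSE
# mapping). Data, dimensions, and mixture size are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic paired training data: acoustic features x (e.g. spectral features)
# and articulatory features y (e.g. coordinates of sensor coils on the tongue).
N, dx, dy = 2000, 2, 2
x = rng.normal(size=(N, dx))
A = rng.normal(size=(dx, dy))
y = x @ A + 0.1 * rng.normal(size=(N, dy))  # noisy acoustic-articulatory relation

# Fit a joint GMM on stacked [x, y] vectors.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))

def invert(x_new, gmm, dx):
    """MMSE estimate of articulatory y given acoustic x under the joint GMM."""
    x_new = np.atleast_2d(x_new)
    K = gmm.n_components
    dy = gmm.means_.shape[1] - dx
    log_post = np.zeros((x_new.shape[0], K))
    cond_means = np.zeros((K, x_new.shape[0], dy))
    for k in range(K):
        mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
        S = gmm.covariances_[k]
        Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
        diff = x_new - mu_x
        Sxx_inv = np.linalg.inv(Sxx)
        # Component posterior from the acoustic marginal N(x; mu_x, Sxx),
        # up to a constant shared across components.
        log_post[:, k] = (np.log(gmm.weights_[k])
                          - 0.5 * np.log(np.linalg.det(Sxx))
                          - 0.5 * np.einsum("ni,ij,nj->n", diff, Sxx_inv, diff))
        # Conditional mean of y given x for this component.
        cond_means[k] = mu_y + diff @ Sxx_inv @ Sxy
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # MMSE estimate: posterior-weighted sum of per-component conditional means.
    return np.einsum("nk,knd->nd", post, cond_means)

y_hat = invert(x[:5], gmm, dx)
```

In a real system the estimated articulatory parameters would then animate the orofacial clone; HMM-based approaches additionally model temporal context rather than mapping frame by frame.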

Full Paper

Bibliographic reference.  Badin, Pierre / Ben Youssef, Atef / Bailly, Gérard / Elisei, Frédéric / Hueber, Thomas (2010): "Visual articulatory feedback for phonetic correction in second language learning", In L2WS-2010, paper P1-10.