4th International Conference on Spoken Language Processing

Philadelphia, PA, USA
October 3-6, 1996

Characterizing Audiovisual Information During Speech

E. Vatikiotis-Bateson (1), K. G. Munhall (2), Y. Kasahara (3), F. Garcia (1), H. Yehia (1)

(1) ATR Human Information Processing Res. Labs., Kyoto, Japan
(2) Queen’s University, Kingston, Canada
(3) Waseda University, Tokyo, Japan

This paper describes several analyses relating facial motion to perioral muscle activity and speech acoustics. The results suggest that linguistically relevant visual information is distributed over large regions of the face and can be modeled from the same control source as the acoustics.

Full Paper

Bibliographic reference. Vatikiotis-Bateson, E. / Munhall, K. G. / Kasahara, Y. / Garcia, F. / Yehia, H. (1996): "Characterizing audiovisual information during speech", in Proc. ICSLP 1996, Philadelphia, PA, pp. 1485-1488.