Auditory-Visual Speech Processing (AVSP) 2010

Hakone, Kanagawa, Japan
September 30-October 3, 2010

Cascading Appearance-Based Features for Visual Voice Activity Detection

Rajitha Navarathna (1), David Dean (1), Patrick Lucey (1,2), Sridha Sridharan (1), Clinton Fookes (1)

(1) Speech, Audio, Image and Video Technology Lab, Queensland University of Technology, Australia
(2) Robotics Institute, Carnegie Mellon University, Department of Psychology, University of Pittsburgh, USA

The detection of voice activity is a challenging problem, especially when the level of acoustic noise is high. Most current approaches utilise only the audio signal, making them susceptible to acoustic noise. An obvious way to overcome this is to use the visual modality. The current state-of-the-art visual feature extraction technique uses a cascade of appearance-based features (i.e. 2D-DCT, feature mean normalisation, interstep LDA). In this paper, we investigate the effectiveness of this technique for the task of visual voice activity detection (VAD), analysing each stage of the cascade and quantifying the relative improvement in performance gained by each successive stage. The experiments were conducted on the CUAVE database, and our results highlight that the dynamics of the visual modality can be used to good effect to improve visual voice activity detection performance.
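The three-stage cascade named in the abstract can be sketched as follows. This is a minimal illustrative pipeline, not the authors' implementation: the ROI size, number of retained DCT coefficients, context window, synthetic data, and the use of scikit-learn's LDA are all assumptions made for the sketch.

```python
# Hypothetical sketch of a cascading appearance-based feature pipeline:
# stage 1: 2D-DCT of a mouth region of interest (ROI)
# stage 2: feature mean normalisation (FMN) over the utterance
# stage 3: LDA over stacked neighbouring frames to capture dynamics
import numpy as np
from scipy.fftpack import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dct2_features(roi, keep=10):
    """2D-DCT of an ROI; retain a low-frequency block of coefficients."""
    coeffs = dct(dct(roi, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:keep, :keep].ravel()

def mean_normalise(feats):
    """Subtract the per-utterance feature mean from every frame."""
    return feats - feats.mean(axis=0, keepdims=True)

def stack_frames(feats, context=2):
    """Concatenate +/- context neighbouring frames to capture dynamics."""
    padded = np.pad(feats, ((context, context), (0, 0)), mode='edge')
    return np.hstack([padded[i:i + len(feats)] for i in range(2 * context + 1)])

# Synthetic example: 40 frames of 32x32 mouth ROIs with binary
# speech/non-speech labels (illustrative data only).
rng = np.random.default_rng(0)
rois = rng.random((40, 32, 32))
labels = (np.arange(40) // 10) % 2

static = np.array([dct2_features(r) for r in rois])   # stage 1: 2D-DCT
static = mean_normalise(static)                       # stage 2: FMN
dynamic = stack_frames(static, context=2)             # stage 3 input

lda = LinearDiscriminantAnalysis(n_components=1)      # stage 3: LDA projection
projected = lda.fit(dynamic, labels).transform(dynamic)
print(projected.shape)  # one discriminative dimension per frame
```

Frame stacking before the LDA step is what lets the final projection exploit visual dynamics, which the abstract identifies as the key source of the performance gain.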

Index Terms: visual speech, voice activity detection, CUAVE database, static features, dynamic features

Bibliographic reference.  Navarathna, Rajitha / Dean, David / Lucey, Patrick / Sridharan, Sridha / Fookes, Clinton (2010): "Cascading appearance-based features for visual voice activity detection", In AVSP-2010, paper S1-1.