Auditory-Visual Speech Processing (AVSP) 2010
Hakone, Kanagawa, Japan
The detection of voice activity is a challenging problem, especially when the level of acoustic noise is high. Most current approaches utilise only the audio signal, making them susceptible to this noise. An obvious way to overcome this is to use the visual modality. The current state-of-the-art visual feature extraction technique uses a cascade of visual features (i.e. 2D-DCT, feature mean normalisation and interstep LDA). In this paper, we investigate the effectiveness of this technique for the task of visual voice activity detection (VAD), analysing each stage of the cascade and quantifying the relative improvement in performance gained by each successive stage. The experiments were conducted on the CUAVE database, and our results highlight that the dynamics of the visual modality can be used to good effect to improve visual voice activity detection performance.
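The first two stages of such a cascade can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the 10x10 low-frequency block, and per-utterance mean normalisation are assumptions, and the interstep LDA stage is omitted since it requires class-labelled training data.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (standard definition).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2(img):
    # Separable 2D DCT: 1D DCT applied along rows, then columns.
    cr = dct_matrix(img.shape[0])
    cc = dct_matrix(img.shape[1])
    return cr @ img @ cc.T

def cascade_features(frames, keep=10):
    # frames: (T, H, W) stack of grey-scale mouth-ROI images.
    # Stage 1: 2D-DCT, retaining the low-frequency keep x keep block
    # as a compact appearance descriptor per frame.
    feats = np.array([dct2(f)[:keep, :keep].ravel() for f in frames])
    # Stage 2: feature mean normalisation over the utterance, which
    # suppresses static speaker and illumination bias so that the
    # dynamics of the mouth region dominate the features.
    feats -= feats.mean(axis=0, keepdims=True)
    return feats
```

An LDA projection trained on speech/non-speech labels would then be applied to these normalised features to produce the final cascade output.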
Index Terms: visual speech, voice activity detection, CUAVE database, static features, dynamic features
Bibliographic reference. Navarathna, Rajitha / Dean, David / Lucey, Patrick / Sridharan, Sridha / Fookes, Clinton (2010): "Cascading appearance-based features for visual voice activity detection", In AVSP-2010, paper S1-1.