4th International Conference on Spoken Language Processing
Philadelphia, PA, USA
We describe an improved method of integrating audio and visual information in an HMM-based audiovisual ASR system. The method uses a modified semicontinuous HMM (SCHMM) for integration and recognition. Our results show substantial improvements over earlier integration methods at high noise levels. Our integration method relies on the assumption that, as environmental conditions deviate from those under which training occurred, the underlying probability distributions will also change. We use phoneme-based SCHMMs for classification of isolated words. The probability models underlying the standard SCHMM are Gaussian; thus, low probability estimates will tend to be associated with high confidences (small differences in the feature values cause large proportional differences in probabilities when the values lie in the tail of the distribution). Therefore, during classification, we replace each Gaussian with a scoring function which looks Gaussian near the mean of the distribution but has a heavier tail. We report results comparing this method with an audio-only system and with previous integration methods. At high noise levels, the system with modified scoring functions shows a better than 50% improvement, although recognition does suffer when noise is low. Methods which can adjust the relative weight of the audio and visual information can still potentially outperform the new method, provided that a reliable way of choosing those weights can be found.
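The abstract does not give the exact form of the heavy-tailed scoring function, so the following is only a minimal sketch of the general idea: a score that coincides with a Gaussian density near the mean but switches to a slower-decaying (here, Laplace-style) tail beyond a hypothetical threshold of `k` standard deviations, so that observations far from the mean are not assigned vanishingly small scores.

```python
import math

def heavy_tailed_score(x, mu=0.0, sigma=1.0, k=2.0):
    """Hypothetical scoring function: Gaussian near the mean,
    heavier (exponential) tail beyond k standard deviations.

    The tail is matched to the Gaussian's value and first
    derivative at z = k, so the score is smooth at the switch.
    """
    z = abs(x - mu) / sigma
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    if z <= k:
        # Ordinary Gaussian density within k standard deviations.
        return norm * math.exp(-0.5 * z * z)
    # Laplace-style tail: exp(k^2/2 - k*z) equals exp(-k^2/2) at
    # z = k and decays linearly in z rather than quadratically.
    return norm * math.exp(0.5 * k * k - k * z)
```

For an observation five standard deviations from the mean, this score is orders of magnitude larger than the Gaussian density, which is the qualitative behavior the paper relies on: outliers caused by changed environmental conditions are penalized less severely, so a single noisy feature cannot dominate the classification.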
Bibliographic reference. Su, Qin / Silsbee, Peter L. (1996): "Robust audiovisual integration using semicontinuous hidden Markov models", In ICSLP-1996, 42-45.