5th International Conference on Spoken Language Processing

Sydney, Australia
November 30 - December 4, 1998

Recurrent Substrings and Data Fusion for Language Recognition

Harvey Lloyd-Thomas (1), Eluned S. Parris (1), Jeremy H. Wright (2)

(1) Ensigma, UK
(2) AT&T, USA

Recurrent phone substrings that are characteristic of a language offer a promising technique for language recognition. In previous work on language recognition, building anti-models to normalise the scores from acoustic phone models for target languages has been shown to reduce the Equal Error Rate (EER) by a third. Recurrent substrings and anti-models have now been applied alongside three other techniques (bigrams, usefulness and frequency histograms) to the NIST 1996 Language Recognition Evaluation, using data from the CALLFRIEND and OGI databases for training. By fusing scores from the different techniques using a multi-layer perceptron, the EER on the NIST data can be reduced further.
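The fusion step described above can be sketched as a small feed-forward network that combines the per-technique scores into a single language score. The following is a minimal illustration, not the authors' implementation: the weights, layer sizes and score values are hypothetical (in practice the network would be trained on held-out development data), and the four inputs stand in for the four techniques named in the abstract.

```python
import numpy as np

def mlp_fuse(scores, W1, b1, W2, b2):
    """Fuse per-technique language scores with a one-hidden-layer MLP.

    scores: vector of normalised scores, one per technique
            (e.g. recurrent substrings, bigrams, usefulness,
            frequency histograms).
    Returns a fused score in (0, 1).
    """
    h = np.tanh(W1 @ scores + b1)                  # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # sigmoid output unit

# Hypothetical, untrained weights for illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=3), 0.0

# One hypothetical score per technique for a single trial.
scores = np.array([0.8, 0.6, 0.7, 0.5])
fused = mlp_fuse(scores, W1, b1, W2, b2)
```

A network like this lets the system learn that some techniques are more reliable than others, rather than simply averaging their scores.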

Full Paper

Bibliographic reference. Lloyd-Thomas, Harvey / Parris, Eluned S. / Wright, Jeremy H. (1998): "Recurrent substrings and data fusion for language recognition", in ICSLP-1998, paper 1061.