In this paper, a general framework for maximum mutual information (MMI) learning of mixture densities is developed based on a discriminative learning strategy, within which a family of probabilistic classifiers can be trained. Two case studies are presented: the class-dependent Gaussian mixture model (GMM) and its extension in which kernels are tied across classes. The corresponding learning algorithms are derived. In the speaker recognition experiments, each speaker is represented by a GMM, and the algorithms train the models with the aim of minimizing the error rate. A normalized distance is also introduced for speaker verification. Five algorithms are evaluated for comparison, and a speaker verification rate of 100% is obtained on a database of 200 French speakers.
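To make the MMI criterion concrete, the following is a minimal NumPy sketch of the objective for class-dependent diagonal-covariance GMMs: the log-likelihood of an utterance under the true speaker's model relative to the log of the prior-weighted sum over all speakers' models (i.e., the log posterior of the true class). The function names, parameter shapes, and uniform-prior default are illustrative assumptions, not the authors' actual algorithm, which additionally derives parameter update rules to maximize this criterion.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of frames x (T, D) under a diagonal-covariance GMM.

    weights: (M,) mixture weights; means, variances: (M, D).
    """
    diff = x[:, None, :] - means[None, :, :]                  # (T, M, D)
    log_comp = -0.5 * (np.sum(diff ** 2 / variances, axis=2)  # Mahalanobis term
                       + np.sum(np.log(2.0 * np.pi * variances), axis=1))
    a = log_comp + np.log(weights)                            # (T, M)
    m = a.max(axis=1, keepdims=True)                          # log-sum-exp over components
    return np.sum(m[:, 0] + np.log(np.exp(a - m).sum(axis=1)))

def mmi_objective(x, models, true_class, priors=None):
    """MMI (conditional likelihood) criterion for one utterance:

        log P(c) p(x | lambda_c) - log sum_k P(k) p(x | lambda_k)

    models: list of (weights, means, variances) tuples, one per class.
    Returns the log posterior of the true class (<= 0); MMI training
    adjusts the GMM parameters to maximize this quantity.
    """
    lls = np.array([gmm_loglik(x, *m) for m in models])
    if priors is None:  # illustrative assumption: uniform class priors
        priors = np.full(len(models), 1.0 / len(models))
    a = lls + np.log(priors)
    m = a.max()
    denom = m + np.log(np.exp(a - m).sum())                   # log-sum-exp over classes
    return lls[true_class] + np.log(priors[true_class]) - denom
```

Unlike maximum-likelihood training, which fits each speaker's GMM to its own data in isolation, maximizing this objective also pushes down the competing speakers' likelihoods, which is what ties the training criterion to the recognition error rate.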
Bibliographic reference. Li, Haizhou / Haton, Jean-Paul / Gong, Yifan (1995): "On MMI learning of Gaussian mixture for speaker models", In EUROSPEECH-1995, 363-366.