Context-Dependent Acoustic Modeling Without Explicit Phone Clustering

Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney


Phoneme-based acoustic modeling for large-vocabulary automatic speech recognition takes advantage of phoneme context. The large number of context-dependent (CD) phonemes and their highly varying statistics require tying or smoothing to enable robust training. Usually, Classification and Regression Trees are used for phonetic clustering, which is standard in Hidden Markov Model (HMM)-based systems. However, this solution introduces a secondary training objective and does not allow for end-to-end training. In this work, we address direct phonetic context modeling for the hybrid Deep Neural Network (DNN)/HMM, which does not rely on any phone clustering algorithm to determine the HMM state inventory. By performing different decompositions of the joint probability of the center phoneme state and its left and right contexts, we obtain a factorized network consisting of different components, trained jointly. Moreover, the representation of the phonetic context for the network relies on phoneme embeddings. The recognition accuracy of our proposed models on the Switchboard task is comparable to, and slightly better than, that of the hybrid model using standard state-tying decision trees.
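To make the factorization concrete, here is a minimal illustrative sketch of one possible decomposition, p(c, l, r | x) = p(c | x) · p(l | c, x) · p(r | c, l, x), where each factor is a separate softmax head and the context is fed to the later heads via a shared phoneme embedding table. All sizes, names, and the random linear heads are hypothetical placeholders, not the paper's actual architecture or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PHONES = 42   # hypothetical phoneme inventory size
EMB_DIM = 8     # hypothetical phoneme embedding dimension
FEAT_DIM = 16   # hypothetical acoustic feature dimension

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Shared phoneme embedding table (illustrative random initialization).
phone_emb = rng.standard_normal((N_PHONES, EMB_DIM))

# One linear "head" per factor of the decomposition
# p(c, l, r | x) = p(c | x) * p(l | c, x) * p(r | c, l, x).
W_center = rng.standard_normal((FEAT_DIM, N_PHONES))
W_left = rng.standard_normal((FEAT_DIM + EMB_DIM, N_PHONES))
W_right = rng.standard_normal((FEAT_DIM + 2 * EMB_DIM, N_PHONES))

def triphone_log_prob(x, center, left, right):
    """Log p(center, left, right | x) under the factorized model."""
    # Center phoneme depends only on the acoustic features x.
    p_c = softmax(x @ W_center)
    # Left context additionally conditions on the center embedding.
    x_l = np.concatenate([x, phone_emb[center]])
    p_l = softmax(x_l @ W_left)
    # Right context conditions on both center and left embeddings.
    x_r = np.concatenate([x, phone_emb[center], phone_emb[left]])
    p_r = softmax(x_r @ W_right)
    return np.log(p_c[center]) + np.log(p_l[left]) + np.log(p_r[right])

x = rng.standard_normal(FEAT_DIM)
lp = triphone_log_prob(x, center=3, left=7, right=11)
```

Because each factor is a proper distribution over the full phoneme inventory, the joint probability is normalized over all triphones without any state tying; in a real system the linear heads would be replaced by jointly trained network components, as the abstract describes.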


DOI: 10.21437/Interspeech.2020-1244

Cite as: Raissi, T., Beck, E., Schlüter, R., Ney, H. (2020) Context-Dependent Acoustic Modeling Without Explicit Phone Clustering. Proc. Interspeech 2020, 4377-4381, DOI: 10.21437/Interspeech.2020-1244.


@inproceedings{Raissi2020,
  author={Tina Raissi and Eugen Beck and Ralf Schlüter and Hermann Ney},
  title={{Context-Dependent Acoustic Modeling Without Explicit Phone Clustering}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4377--4381},
  doi={10.21437/Interspeech.2020-1244},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1244}
}