A DNN-HMM-DNN Hybrid Model for Discovering Word-Like Units from Spoken Captions and Image Regions

Liming Wang, Mark Hasegawa-Johnson


Discovering word-like units without textual transcriptions is an important step in low-resource speech technology. In this work, we demonstrate a model inspired by statistical machine translation and hidden Markov model/deep neural network (HMM-DNN) hybrid systems. Our learning algorithm is capable of discovering the visual and acoustic correlates of K distinct words in an unknown language by simultaneously learning the mapping from image regions to concepts (the first DNN), the mapping from acoustic feature vectors to phones (the second DNN), and the optimal alignment between the two (the HMM). In a simulated low-resource setting using the MSCOCO and SpeechCOCO datasets, our model achieves 62.4% alignment accuracy and outperforms the audio-only segmental embedded GMM approach on standard word discovery evaluation metrics.
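The abstract's three components (region-to-concept DNN, frame-to-phone DNN, and HMM alignment) can be sketched with a minimal NumPy toy. Everything here is illustrative, not the paper's actual parameterization: the toy dimensions, the random Dirichlet posteriors standing in for the two trained DNNs, the random concept-to-phone emission table, and the self-loop transition bias are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical toy dimensions (not from the paper):
# 3 image regions, 5 acoustic frames, K = 4 concepts, 6 phone classes.
rng = np.random.default_rng(0)
K, n_regions, n_frames, n_phones = 4, 3, 5, 6

# Stand-in for the first DNN: image regions -> concept posteriors p(k | region).
region_post = rng.dirichlet(np.ones(K), size=n_regions)       # (3, 4)
# Stand-in for the second DNN: acoustic frames -> phone posteriors p(phone | frame).
phone_post = rng.dirichlet(np.ones(n_phones), size=n_frames)  # (5, 6)
# Illustrative concept-to-phone emission model p(phone | k).
emit = rng.dirichlet(np.ones(n_phones), size=K)               # (4, 6)

# Per-frame, per-region score: how well each region's concept
# distribution explains each frame's phone posterior.
frame_given_k = phone_post @ emit.T        # (n_frames, K)
score = frame_given_k @ region_post.T      # (n_frames, n_regions)

# HMM alignment: regions act as hidden states, and a self-transition
# bias encourages contiguous word-like segments; decode with Viterbi.
stay = 0.8  # assumed self-loop probability
trans = np.full((n_regions, n_regions), (1 - stay) / (n_regions - 1))
np.fill_diagonal(trans, stay)

log_emit = np.log(score + 1e-12)
log_trans = np.log(trans)
delta = np.zeros((n_frames, n_regions))
back = np.zeros((n_frames, n_regions), dtype=int)
delta[0] = log_emit[0] - np.log(n_regions)  # uniform initial state
for t in range(1, n_frames):
    cand = delta[t - 1][:, None] + log_trans  # cand[i, j]: from state i to j
    back[t] = cand.argmax(axis=0)
    delta[t] = cand.max(axis=0) + log_emit[t]

# Backtrace the best frame-to-region alignment path.
path = [int(delta[-1].argmax())]
for t in range(n_frames - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print(path)  # one aligned region index per acoustic frame
```

In the actual model, the two posterior tables would come from the trained DNNs, and the alignment statistics from the HMM pass would in turn supply the targets for retraining them, in the spirit of classic HMM-DNN hybrid training.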


DOI: 10.21437/Interspeech.2020-1148

Cite as: Wang, L., Hasegawa-Johnson, M. (2020) A DNN-HMM-DNN Hybrid Model for Discovering Word-Like Units from Spoken Captions and Image Regions. Proc. Interspeech 2020, 1456-1460, DOI: 10.21437/Interspeech.2020-1148.


@inproceedings{Wang2020,
  author={Liming Wang and Mark Hasegawa-Johnson},
  title={{A DNN-HMM-DNN Hybrid Model for Discovering Word-Like Units from Spoken Captions and Image Regions}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1456--1460},
  doi={10.21437/Interspeech.2020-1148},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1148}
}