Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization

Benjamin Milde, Chris Biemann


The Sparsespeech model is an unsupervised acoustic model that can generate discrete pseudo-labels for untranscribed speech. We extend the Sparsespeech model to allow for sampling over a random discrete variable, yielding pseudo-posteriorgrams. The degree of sparsity in these posteriorgrams can be fully controlled after the model has been trained. We use the Gumbel-Softmax trick to approximately sample from a discrete distribution in the neural network, which allows us to train the network efficiently with standard backpropagation. The improved model is trained and evaluated on the Libri-Light corpus, a benchmark for ASR with limited or no supervision. The model is trained on 600h and 6000h of English read speech. We evaluate the improved model using the ABX error measure and a semi-supervised setting with 10h of transcribed speech. With the improved Sparsespeech model trained on 600h of speech data, we observe a relative improvement of up to 31.3% in ABX error rate within speakers and 22.5% across speakers, with further gains when scaling the model to 6000h.
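The Gumbel-Softmax trick mentioned above replaces a non-differentiable draw from a categorical distribution with a softmax over logits perturbed by Gumbel noise; a temperature parameter controls how close the result is to a one-hot vector, which is what allows the posteriorgram sparsity to be tuned after training. Below is a minimal, self-contained NumPy sketch of that sampling step, written for illustration only; it is not the authors' implementation, and the function name and temperature values are our own choices.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Approximately sample a one-hot vector from a categorical
    distribution parameterized by `logits` (Gumbel-Softmax trick).

    Low temperatures yield near-one-hot (sparse) outputs; high
    temperatures yield smoother, denser distributions.
    """
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1)
    u = rng.uniform(low=1e-10, high=1.0, size=np.shape(logits))
    gumbel_noise = -np.log(-np.log(u))
    # Temperature-scaled softmax over perturbed logits
    y = (np.asarray(logits) + gumbel_noise) / temperature
    y = y - y.max()  # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.7]))
smooth = gumbel_softmax_sample(logits, temperature=5.0, rng=rng)
sparse = gumbel_softmax_sample(logits, temperature=0.01, rng=rng)
```

Because the softmax is differentiable, gradients can flow through the sample during backpropagation; only the Gumbel noise itself is stochastic, which is why the model can be trained end-to-end with standard gradient descent.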


DOI: 10.21437/Interspeech.2020-2629

Cite as: Milde, B., Biemann, C. (2020) Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization. Proc. Interspeech 2020, 2747-2751, DOI: 10.21437/Interspeech.2020-2629.


@inproceedings{Milde2020,
  author={Benjamin Milde and Chris Biemann},
  title={{Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={2747--2751},
  doi={10.21437/Interspeech.2020-2629},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2629}
}