Towards Learning a Universal Non-Semantic Representation of Speech

Joel Shor, Aren Jansen, Ronnie Maor, Oran Lang, Omry Tuval, Félix de Chaumont Quitry, Marco Tagliasacchi, Ira Shavitt, Dotan Emanuel, Yinnon Haviv


The ultimate goal of transfer learning is to reduce labeled data requirements by exploiting a pre-existing embedding model trained on different datasets or tasks. The visual and language communities have established benchmarks to compare embeddings, but the speech community has yet to do so. This paper proposes a benchmark for comparing speech representations on non-semantic tasks, and proposes a representation based on an unsupervised triplet-loss objective. The proposed representation outperforms other representations on the benchmark, and even exceeds state-of-the-art performance on a number of transfer learning tasks. The embedding is trained on a publicly available dataset and tested on a variety of low-resource downstream tasks, including personalization tasks and tasks from the medical domain. The benchmark, models, and evaluation code are publicly released.
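The triplet-loss objective mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it simply shows the standard hinge-style triplet loss, where the anchor embedding is pulled toward a positive (e.g., an audio segment from the same clip) and pushed away from a negative (a segment from a different clip). The function name, margin value, and toy data below are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge-style triplet loss (illustrative, not the paper's code):
    penalize triplets where the anchor is not closer to the positive
    than to the negative by at least `margin` (squared L2 distance)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.maximum(0.0, d_pos - d_neg + margin).mean())

# Toy batch: 4 triplets of 8-dimensional embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
p = a + 0.01 * rng.normal(size=(4, 8))   # positives lie close to anchors
n = rng.normal(size=(4, 8))              # negatives are unrelated
print(triplet_loss(a, p, n))             # near zero: triplets already satisfied
print(triplet_loss(a, n, p))             # large: positive/negative roles swapped
```

In the self-supervised setting the paper describes, positives and negatives come from temporal proximity within audio clips rather than labels, so no annotation is needed to train the embedding.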


DOI: 10.21437/Interspeech.2020-1242

Cite as: Shor, J., Jansen, A., Maor, R., Lang, O., Tuval, O., Quitry, F.D.C., Tagliasacchi, M., Shavitt, I., Emanuel, D., Haviv, Y. (2020) Towards Learning a Universal Non-Semantic Representation of Speech. Proc. Interspeech 2020, 140-144, DOI: 10.21437/Interspeech.2020-1242.


@inproceedings{Shor2020,
  author={Joel Shor and Aren Jansen and Ronnie Maor and Oran Lang and Omry Tuval and Félix de Chaumont Quitry and Marco Tagliasacchi and Ira Shavitt and Dotan Emanuel and Yinnon Haviv},
  title={{Towards Learning a Universal Non-Semantic Representation of Speech}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={140--144},
  doi={10.21437/Interspeech.2020-1242},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1242}
}