JukeBox: A Multilingual Singer Recognition Dataset

Anurag Chowdhury, Austin Cozzo, Arun Ross


A text-independent speaker recognition system relies on successfully encoding speech factors such as vocal pitch, intensity, and timbre to achieve good performance. A majority of such systems are trained and evaluated using spoken voice or everyday conversational voice data. Spoken voice, however, exhibits a limited range of possible speaker dynamics, thus constraining the utility of the derived speaker recognition models. Singing voice, on the other hand, covers a broader range of vocal and ambient factors and can, therefore, be used to evaluate the robustness of a speaker recognition system. However, most existing speaker recognition datasets focus only on spoken voice; labeled singing voice data suitable for speaker recognition research remains significantly scarcer. To address this issue, we assemble JukeBox — a speaker recognition dataset with multilingual singing voice audio annotated with singer identity, gender, and language labels. We use current state-of-the-art methods to demonstrate the difficulty of performing speaker recognition on singing voice using models trained on spoken voice alone. We also evaluate the effect of gender and language on speaker recognition performance, in both spoken and singing voice data. The complete JukeBox dataset can be accessed at http://iprobe.cse.msu.edu/datasets/jukebox.html
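The abstract notes that speaker recognition front-ends encode factors such as vocal pitch and intensity from audio. As an illustrative sketch only (not the paper's method, and using a synthetic tone rather than JukeBox data), the following shows two such per-frame features: pitch estimated by naive autocorrelation, and intensity as root-mean-square energy.

```python
import math

SR = 16000  # assumed sample rate in Hz

def estimate_pitch(frame, sr=SR, fmin=80.0, fmax=500.0):
    """Naive autocorrelation pitch estimator (illustrative only).

    Searches lags corresponding to fundamental frequencies in
    [fmin, fmax] and returns the frequency whose lag maximizes
    the frame's autocorrelation.
    """
    lag_min = int(sr / fmax)  # shortest period (highest pitch) to search
    lag_max = int(sr / fmin)  # longest period (lowest pitch) to search
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag

def frame_intensity(frame):
    """Root-mean-square energy of a frame, a simple intensity measure."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# Synthetic "voiced" frame: a 50 ms pure tone at 440 Hz.
frame = [math.sin(2 * math.pi * 440 * n / SR) for n in range(int(0.05 * SR))]
pitch = estimate_pitch(frame)       # close to 440 Hz (quantized by integer lag)
intensity = frame_intensity(frame)  # close to 1/sqrt(2) for a unit sine
```

Real systems use far more robust estimators (and learned embeddings), but the sketch conveys what "encoding pitch and intensity" means at the frame level.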


DOI: 10.21437/Interspeech.2020-2972

Cite as: Chowdhury, A., Cozzo, A., Ross, A. (2020) JukeBox: A Multilingual Singer Recognition Dataset. Proc. Interspeech 2020, 2267-2271, DOI: 10.21437/Interspeech.2020-2972.


@inproceedings{Chowdhury2020,
  author={Anurag Chowdhury and Austin Cozzo and Arun Ross},
  title={{JukeBox: A Multilingual Singer Recognition Dataset}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={2267--2271},
  doi={10.21437/Interspeech.2020-2972},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2972}
}