An Open-Source Voice Type Classifier for Child-Centered Daylong Recordings

Marvin Lavechin, Ruben Bousbib, Hervé Bredin, Emmanuel Dupoux, Alejandrina Cristia


Spontaneous conversations in real-world settings, such as those captured in child-centered daylong recordings, are amongst the most challenging audio data to process. Nevertheless, speech processing models that handle such a wide variety of conditions would be particularly useful for language acquisition studies, in which researchers are interested in the quantity and quality of the speech that children hear and produce, as well as for early diagnosis and for measuring the effects of remediation. In this paper, we present our approach to designing an open-source neural network that classifies audio segments into vocalizations produced by the child wearing the recording device, vocalizations produced by other children, adult male speech, and adult female speech. To this end, we gathered diverse child-centered corpora that together comprise 260 hours of recordings and cover 10 languages. Our model's output can serve as input for downstream tasks such as estimating the number of words produced by adult speakers or the number of linguistic units produced by children. Our architecture combines SincNet filters with a stack of recurrent layers, and it outperforms by a large margin the state-of-the-art system, the Language ENvironment Analysis (LENA) software, which has been used in numerous child language studies.
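The SincNet front end mentioned in the abstract replaces free-form convolutional kernels with band-pass filters parameterized only by their two cutoff frequencies, so the first layer learns interpretable frequency bands directly from the waveform. The snippet below is an illustrative sketch of that idea in plain NumPy, not the authors' implementation: each filter is the difference of two windowed sinc low-pass filters, and the frequencies and kernel size chosen here are arbitrary examples.

```python
import numpy as np

def sinc_bandpass(f1, f2, kernel_size, sample_rate=16000):
    """Band-pass FIR filter defined only by cutoffs f1 < f2 (Hz),
    in the spirit of SincNet. A low-pass filter with cutoff f is
    2f * sinc(2 f t); subtracting two of them gives a band-pass."""
    # time axis centered on 0 so the filter is symmetric (linear phase)
    t = (np.arange(kernel_size) - (kernel_size - 1) / 2) / sample_rate
    band = 2 * f2 * np.sinc(2 * f2 * t) - 2 * f1 * np.sinc(2 * f1 * t)
    band *= np.hamming(kernel_size)        # damp truncation ripple
    return band / np.max(np.abs(band))     # scale to unit peak

# A tiny example bank of three 200 Hz-wide bands (frequencies are
# illustrative; in SincNet the cutoffs are learned by gradient descent).
bank = np.stack([sinc_bandpass(f, f + 200, 251) for f in (100, 500, 1000)])
```

In the learned version, `f1` and `f2` become trainable parameters, so the whole bank costs two scalars per filter instead of `kernel_size` weights, which is part of what makes the front end data-efficient on noisy daylong audio.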


DOI: 10.21437/Interspeech.2020-1690

Cite as: Lavechin, M., Bousbib, R., Bredin, H., Dupoux, E., Cristia, A. (2020) An Open-Source Voice Type Classifier for Child-Centered Daylong Recordings. Proc. Interspeech 2020, 3072-3076, DOI: 10.21437/Interspeech.2020-1690.


@inproceedings{Lavechin2020,
  author={Marvin Lavechin and Ruben Bousbib and Hervé Bredin and Emmanuel Dupoux and Alejandrina Cristia},
  title={{An Open-Source Voice Type Classifier for Child-Centered Daylong Recordings}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3072--3076},
  doi={10.21437/Interspeech.2020-1690},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1690}
}