INTERSPEECH 2011
12th Annual Conference of the International Speech Communication Association

Florence, Italy
August 27-31, 2011

Semi-Supervised Single-Channel Speech-Music Separation for Automatic Speech Recognition

Cemil Demir (1), A. Taylan Cemgil (2), Murat Saraçlar (2)

(1) TÜBİTAK-BİLGEM, Turkey
(2) Boğaziçi Üniversitesi, Turkey

In this study, we propose a semi-supervised speech-music separation method which uses the speech, music and speech-music segments of a given segmented audio signal to separate the speech and music signals from each other in the mixed speech-music segments. In this strategy, we assume that the background music of the mixed signal is partially composed of repetitions of the music segment in the audio. Therefore, we use a mixture model to represent the music signal. The speech signal is modeled using a Non-negative Matrix Factorization (NMF) model. The prior model of the template matrix of the NMF model is estimated from the speech segment and updated using the mixed segment of the audio. The separation performance of the proposed method is evaluated on an automatic speech recognition task.
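The abstract's core idea of fixing templates learned from a source-only segment while adapting the remaining factors on the mixture can be illustrated with a generic semi-supervised NMF sketch. This is not the authors' exact model (they use a mixture model for music and a prior on the speech templates); the sketch below is a simplified stand-in, assuming standard KL-divergence multiplicative updates, music templates pre-learned from the music segment and held fixed, and Wiener-style masking for reconstruction. All function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_separate(V, W_music, n_speech=8, n_iter=200, eps=1e-9):
    """Semi-supervised NMF separation sketch (KL divergence, multiplicative updates).

    V:       non-negative magnitude spectrogram of the mixture (F x T)
    W_music: music template matrix (F x Km), pre-learned from a music-only
             segment and kept fixed; speech templates are learned from the mix
    Returns estimated speech and music magnitude spectrograms.
    """
    F, T = V.shape
    Ks = n_speech
    W = np.hstack([rng.random((F, Ks)) + eps, W_music])      # [speech | music] templates
    H = rng.random((Ks + W_music.shape[1], T)) + eps         # activations for both sources
    for _ in range(n_iter):
        WH = W @ H + eps
        # multiplicative update for all activations (KL divergence)
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        # update only the speech templates; the music templates stay fixed
        W[:, :Ks] *= ((V / WH) @ H[:Ks].T) / (np.ones_like(V) @ H[:Ks].T + eps)
    WH = W @ H + eps
    # Wiener-style masks: each source keeps its share of the mixture energy
    V_speech = V * (W[:, :Ks] @ H[:Ks]) / WH
    V_music = V * (W[:, Ks:] @ H[Ks:]) / WH
    return V_speech, V_music

# toy example with synthetic spectrograms
F, T = 40, 60
W_music = rng.random((F, 4))
V = rng.random((F, T)) * 0.5 + W_music @ rng.random((4, T))  # "speech" + music
Vs, Vm = nmf_separate(V, W_music)
print(np.allclose(Vs + Vm, V, atol=1e-6))  # → True: the two masks partition the mixture
```

Because the reconstruction uses ratio masks over a shared factorization, the two estimates always sum back to the observed mixture; separation quality then depends on how well the fixed music templates explain the background.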

Bibliographic reference.  Demir, Cemil / Cemgil, A. Taylan / Saraçlar, Murat (2011): "Semi-supervised single-channel speech-music separation for automatic speech recognition", In INTERSPEECH-2011, 681-684.