Improved RawNet with Feature Map Scaling for Text-Independent Speaker Verification Using Raw Waveforms

Jee-weon Jung, Seung-bin Kim, Hye-jin Shim, Ju-ho Kim, Ha-Jin Yu


Recent advances in deep learning have facilitated the design of speaker verification systems that directly input raw waveforms. For example, RawNet [1] extracts speaker embeddings from raw waveforms, which simplifies the processing pipeline and demonstrates competitive performance. In this study, we improve RawNet by scaling feature maps using various methods. The proposed mechanism utilizes a scale vector that adopts a sigmoid non-linear function; its dimensionality equals the number of filters in the given feature map. Using this scale vector, we propose to scale the feature map multiplicatively, additively, or both. In addition, we investigate replacing the first convolution layer of RawNet with the sinc-convolution layer of SincNet. Experiments performed on the VoxCeleb1 evaluation dataset demonstrate the effectiveness of the proposed methods, and the best performing system reduces the equal error rate by half compared to the original RawNet. Expanded evaluation results obtained using the VoxCeleb1-E and VoxCeleb1-H protocols marginally outperform existing state-of-the-art systems.
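The feature map scaling described in the abstract can be sketched in PyTorch as follows. This is an illustrative example, not the authors' released code: the class name `FeatureMapScaling`, the use of global average pooling over time to derive the scale vector, and the exact form of the "both" combination are assumptions based only on the abstract (a scale vector obtained through a sigmoid, applied multiplicatively, additively, or both).

```python
import torch
import torch.nn as nn


class FeatureMapScaling(nn.Module):
    """Sketch of feature map scaling (FMS) for a 1-D feature map.

    A scale vector s in (0, 1)^C (C = number of filters) is derived from the
    feature map and then applied multiplicatively (f * s), additively (f + s),
    or both. The precise "both" combination is an assumption here.
    """

    def __init__(self, num_filters: int, mode: str = "mul"):
        super().__init__()
        self.fc = nn.Linear(num_filters, num_filters)
        self.sigmoid = nn.Sigmoid()
        self.mode = mode  # "mul", "add", or "both"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, filters, time) -- models on raw waveforms use 1-D feature maps
        s = x.mean(dim=-1)                # global average pooling over time (assumed)
        s = self.sigmoid(self.fc(s))      # scale vector, one value in (0, 1) per filter
        s = s.unsqueeze(-1)               # broadcast over the time axis
        if self.mode == "mul":
            return x * s                  # multiplicative scaling
        if self.mode == "add":
            return x + s                  # additive scaling
        return (x + s) * s                # "both": one plausible combination


# Minimal usage example on a dummy feature map with 128 filters.
fms = FeatureMapScaling(num_filters=128, mode="both")
out = fms(torch.randn(8, 128, 200))      # -> shape (8, 128, 200)
```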


 DOI: 10.21437/Interspeech.2020-1011

Cite as: Jung, J., Kim, S., Shim, H., Kim, J., Yu, H. (2020) Improved RawNet with Feature Map Scaling for Text-Independent Speaker Verification Using Raw Waveforms. Proc. Interspeech 2020, 1496-1500, DOI: 10.21437/Interspeech.2020-1011.


@inproceedings{Jung2020,
  author={Jee-weon Jung and Seung-bin Kim and Hye-jin Shim and Ju-ho Kim and Ha-Jin Yu},
  title={{Improved RawNet with Feature Map Scaling for Text-Independent Speaker Verification Using Raw Waveforms}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1496--1500},
  doi={10.21437/Interspeech.2020-1011},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1011}
}