Siamese Autoencoders for Speech Style Extraction and Switching Applied to Voice Identification and Conversion

Seyed Hamidreza Mohammadi, Alexander Kain


We propose an architecture called siamese autoencoders for extracting and switching pre-determined styles of speech signals while retaining the content. We apply this architecture to a voice conversion task in which we define the content to be the linguistic message and the style to be the speaker’s voice. We assume two or more data streams with the same content but unique styles. The architecture is composed of two or more separate but shared-weight autoencoders that are joined by loss functions at the hidden layers. A hidden vector is composed of style and content sub-vectors, and the loss functions constrain the encodings to decompose style and content. We can select an intended target speaker either by supplying the associated style vector or by extracting a new style vector from a new utterance, using a proposed style extraction algorithm. We focus on in-training speakers but also perform initial experiments with out-of-training speakers. We propose and study several types of loss functions. The experimental results show that the proposed many-to-many model is able to convert voices successfully; however, its performance does not surpass that of the state-of-the-art one-to-one model.
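The architecture described above can be illustrated with a minimal numpy sketch: each of the two shared-weight autoencoders encodes a frame into a hidden vector split into style and content sub-vectors, a loss at the hidden layer ties the content sub-vectors of the two streams together, and style switching decodes the source content with a target style vector. The single linear encoder/decoder, the dimensions, and all function names here are illustrative assumptions, not the paper's actual network or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
D_IN, D_STYLE, D_CONTENT = 20, 4, 8
D_HID = D_STYLE + D_CONTENT

# Shared-weight encoder/decoder (a single linear layer each, for brevity)
W_enc = rng.normal(scale=0.1, size=(D_HID, D_IN))
W_dec = rng.normal(scale=0.1, size=(D_IN, D_HID))

def encode(x):
    """Map an input frame to a hidden vector [style ; content]."""
    h = np.tanh(W_enc @ x)
    return h[:D_STYLE], h[D_STYLE:]  # style and content sub-vectors

def decode(style, content):
    """Reconstruct a frame from a (style, content) pair."""
    return W_dec @ np.concatenate([style, content])

def siamese_losses(x_a, x_b):
    """Losses joining the two shared-weight autoencoders at the hidden layer.

    x_a and x_b are time-aligned frames carrying the same content
    in two different styles (speakers).
    """
    s_a, c_a = encode(x_a)
    s_b, c_b = encode(x_b)
    recon = (np.mean((decode(s_a, c_a) - x_a) ** 2)
             + np.mean((decode(s_b, c_b) - x_b) ** 2))
    # Same content should yield the same content encoding:
    content_match = np.mean((c_a - c_b) ** 2)
    return recon, content_match

def convert(x_src, style_tgt):
    """Style switching: decode the source content with a target style vector."""
    _, c = encode(x_src)
    return decode(style_tgt, c)

# Usage: extract a style vector from a target-speaker frame, then
# convert a source frame to that style.
x_src, x_tgt = rng.normal(size=D_IN), rng.normal(size=D_IN)
s_tgt, _ = encode(x_tgt)
y = convert(x_src, s_tgt)
```

In training, the reconstruction and content-matching losses (plus whichever style-separating losses are chosen) would be minimized jointly over the shared weights; at conversion time only `encode`, a style vector, and `decode` are needed.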


DOI: 10.21437/Interspeech.2017-1434

Cite as: Mohammadi, S.H., Kain, A. (2017) Siamese Autoencoders for Speech Style Extraction and Switching Applied to Voice Identification and Conversion. Proc. Interspeech 2017, 1293-1297, DOI: 10.21437/Interspeech.2017-1434.


@inproceedings{Mohammadi2017,
  author={Seyed Hamidreza Mohammadi and Alexander Kain},
  title={Siamese Autoencoders for Speech Style Extraction and Switching Applied to Voice Identification and Conversion},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1293--1297},
  doi={10.21437/Interspeech.2017-1434},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1434}
}