Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization

Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang


In a recent paper, we presented a generative adversarial network (GAN)-based model for unconditional generation of the mel-spectrograms of singing voices. As the generator of the model is designed to take a variable-length sequence of noise vectors as input, it can generate mel-spectrograms of variable length. However, our previous listening test showed that the quality of the generated audio leaves room for improvement. The present paper extends that previous work in the following aspects. First, we employ a hierarchical architecture in the generator to induce some structure in the temporal dimension. Second, we introduce a cycle regularization mechanism to the generator to avoid mode collapse. Third, we evaluate the performance of the new model not only for generating singing voices, but also for generating speech voices. Evaluation results show that the new model outperforms the prior one both objectively and subjectively. We also employ the model to unconditionally generate sequences of piano and violin music and find the results promising. Audio examples, as well as the code for implementing our model, will be publicly available online upon paper publication.
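The abstract does not spell out the cycle regularization, but mechanisms of this kind typically add an inverter network that maps the generated output back to the input noise and penalize the reconstruction error, so that distinct noise inputs cannot collapse onto the same output. The sketch below is a hypothetical illustration with toy linear maps (the names `generator`, `inverter`, and all shapes are stand-ins, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
noise_dim, mel_bins, frames = 20, 80, 4

def generator(z, W):
    # Map a sequence of noise vectors to mel-spectrogram frames.
    # Toy linear stand-in for the paper's GAN generator.
    return z @ W  # shape: (frames, mel_bins)

def inverter(mel, V):
    # Map mel frames back to the noise space (the "cycle" direction).
    return mel @ V  # shape: (frames, noise_dim)

# Toy parameters; in the real model these would be trained networks.
W = rng.standard_normal((noise_dim, mel_bins)) * 0.1
V = rng.standard_normal((mel_bins, noise_dim)) * 0.1

z = rng.standard_normal((frames, noise_dim))   # variable-length noise sequence
mel = generator(z, W)                          # generated mel-spectrogram
z_hat = inverter(mel, V)                       # reconstructed noise

# Cycle regularization term: reconstruction error between z and z_hat,
# added to the usual adversarial loss during training.
cycle_loss = np.mean(np.abs(z_hat - z))
```

Minimizing `cycle_loss` alongside the adversarial loss encourages the generator to preserve the information in the noise input, which discourages mode collapse.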


DOI: 10.21437/Interspeech.2020-1137

Cite as: Liu, J., Chen, Y., Yeh, Y., Yang, Y. (2020) Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. Proc. Interspeech 2020, 1997-2001, DOI: 10.21437/Interspeech.2020-1137.


@inproceedings{Liu2020,
  author={Jen-Yu Liu and Yu-Hua Chen and Yin-Cheng Yeh and Yi-Hsuan Yang},
  title={{Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1997--2001},
  doi={10.21437/Interspeech.2020-1137},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1137}
}