Improving Opus Low Bit Rate Quality with Neural Speech Synthesis

Jan Skoglund, Jean-Marc Valin


The voice mode of the Opus audio coder can compress wideband speech at bit rates ranging from 6 kb/s to 40 kb/s. However, Opus is at its core a waveform-matching coder, and as the rate drops below 10 kb/s, quality degrades quickly. As the rate decreases even further, parametric coders tend to perform better than waveform coders. In this paper we propose a backward-compatible way of improving low bit rate Opus quality by resynthesizing speech from the decoded parameters. We compare two different neural generative models, WaveNet and LPCNet. WaveNet is a powerful, high-complexity, and high-latency architecture that is not feasible for a practical system, yet provides the best known achievable quality with generative models. LPCNet is a low-complexity, low-latency RNN-based generative model that can practically be implemented on mobile phones. We apply these systems with parameters from Opus coded at 6 kb/s as conditioning features for the generative models. A listening test shows that for the same 6 kb/s Opus bit stream, speech synthesized with LPCNet clearly outperforms the output of the standard Opus decoder. This opens up ways to improve the decoding quality of existing speech and audio waveform coders without breaking compatibility.


DOI: 10.21437/Interspeech.2020-2939

Cite as: Skoglund, J., Valin, J.-M. (2020) Improving Opus Low Bit Rate Quality with Neural Speech Synthesis. Proc. Interspeech 2020, 2847-2851, DOI: 10.21437/Interspeech.2020-2939.


@inproceedings{Skoglund2020,
  author={Jan Skoglund and Jean-Marc Valin},
  title={{Improving Opus Low Bit Rate Quality with Neural Speech Synthesis}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2847--2851},
  doi={10.21437/Interspeech.2020-2939},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2939}
}