Vector-Quantized Autoregressive Predictive Coding

Yu-An Chung, Hao Tang, James Glass


Autoregressive Predictive Coding (APC), as a self-supervised objective, has enjoyed success in learning representations from large amounts of unlabeled data, and the learned representations are rich enough to benefit many downstream tasks. However, the connection between low self-supervised loss and strong performance on downstream tasks remains unclear. In this work, we propose Vector-Quantized Autoregressive Predictive Coding (VQ-APC), a novel model that produces quantized representations, allowing us to explicitly control the amount of information encoded in the representations. By studying a sequence of increasingly limited models, we reveal the constituents of the learned representations. In particular, we confirm the presence of information with probing tasks, and show the absence of information with mutual information, uncovering the model's preference for preserving speech information as its capacity becomes constrained. We find that there exists a point where phonetic and speaker information are amplified to maximize a self-supervised objective. As a byproduct, the learned codes for a particular model capacity correspond well to English phones.
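The quantization step at the heart of VQ-APC can be illustrated with a minimal sketch: each frame-level representation is replaced by its nearest vector from a finite codebook, which caps the information the representation can carry. The sketch below assumes simple nearest-neighbor (squared-Euclidean) lookup over a random codebook; the paper's actual parameterization and training details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(z, codebook):
    """Map each D-dim frame in z (T x D) to its nearest codebook vector.

    Returns the quantized frames (T x D) and the chosen code indices (T,).
    """
    # Squared Euclidean distance between every frame and every code: (T, V).
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # index of the nearest code per frame
    return codebook[idx], idx

# Hypothetical sizes for illustration: 5 frames, 8-dim features, 16 codes.
T, D, V = 5, 8, 16
z = rng.standard_normal((T, D))
codebook = rng.standard_normal((V, D))
zq, codes = quantize(z, codebook)
print(zq.shape, codes.shape)  # → (5, 8) (5,)
```

Shrinking the codebook size V directly limits model capacity, which is the knob the paper varies to study what information the representations preserve.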


DOI: 10.21437/Interspeech.2020-1228

Cite as: Chung, Y.-A., Tang, H., Glass, J. (2020) Vector-Quantized Autoregressive Predictive Coding. Proc. Interspeech 2020, 3760-3764, DOI: 10.21437/Interspeech.2020-1228.


@inproceedings{Chung2020,
  author={Yu-An Chung and Hao Tang and James Glass},
  title={{Vector-Quantized Autoregressive Predictive Coding}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3760--3764},
  doi={10.21437/Interspeech.2020-1228},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1228}
}