Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition

Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linares, Renato de Mori, Yoshua Bengio

Recently, the connectionist temporal classification (CTC) model coupled with recurrent (RNN) or convolutional neural networks (CNN) has made it easier to train speech recognition systems in an end-to-end fashion. However, in real-valued models, time frame components such as mel-filter-bank energies and the cepstral coefficients obtained from them, together with their first and second order derivatives, are processed as individual elements, while a natural alternative is to process such components as composed entities. We propose to group such elements in the form of quaternions and to process these quaternions using the established quaternion algebra. Quaternion numbers and quaternion neural networks have demonstrated their efficiency at processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learning parameters than real-valued models. This paper proposes to integrate multiple feature views in a quaternion-valued convolutional neural network (QCNN), to be used for sequence-to-sequence mapping with the CTC model. Promising results are reported using simple QCNNs in phoneme recognition experiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
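The core idea of grouping four related feature components into one quaternion, and mixing them with the Hamilton product instead of elementwise real multiplications, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the feature names (energy plus its derivatives) and the weight values are hypothetical.

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of two quaternions given as (r, x, y, z) arrays.

    A quaternion layer uses this product in place of the ordinary real
    multiply-accumulate: one quaternion weight (4 real parameters) mixes
    all four components of the input with all four components of the
    output, encoding their internal dependencies.
    """
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,  # i part
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,  # j part
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,  # k part
    ])

# Hypothetical per-frame acoustic features grouped as a quaternion:
# (0, filter-bank energy, first derivative, second derivative).
frame = np.array([0.0, 0.8, 0.1, -0.05])
weight = np.array([0.5, -0.2, 0.3, 0.1])  # one quaternion weight
print(hamilton_product(weight, frame))
```

This also illustrates the parameter-saving argument: fully connecting four real inputs to four real outputs costs 16 real weights, whereas a single quaternion weight achieves a (constrained) 4-to-4 mixing with only 4 real parameters.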

DOI: 10.21437/Interspeech.2018-1898

Cite as: Parcollet, T., Zhang, Y., Morchid, M., Trabelsi, C., Linares, G., de Mori, R., Bengio, Y. (2018) Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition. Proc. Interspeech 2018, 22-26, DOI: 10.21437/Interspeech.2018-1898.

@inproceedings{parcollet18_interspeech,
  author={Titouan Parcollet and Ying Zhang and Mohamed Morchid and Chiheb Trabelsi and Georges Linares and Renato {de Mori} and Yoshua Bengio},
  title={Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={22--26},
  doi={10.21437/Interspeech.2018-1898}
}