Multi-Accent Adaptation Based on Gate Mechanism

Han Zhu, Li Wang, Pengyuan Zhang, Yonghong Yan

When only a limited amount of accented speech data is available, the conventional approach to improving multi-accent speech recognition is accent-specific adaptation, which adapts the baseline model to each target accent independently. To simplify the adaptation procedure, we explore adapting the baseline model to multiple target accents simultaneously with multi-accent mixed data. To this end, we propose an accent-specific top layer with a gate mechanism (AST-G) to realize multi-accent adaptation. Compared with the baseline model and accent-specific adaptation, AST-G achieves 9.8% and 1.9% average relative WER reduction, respectively. However, in real-world applications, the accent category label is not available in advance at inference time. Therefore, we use an accent classifier to predict the accent label. To jointly train the acoustic model and the accent classifier, we propose multi-task learning with a gate mechanism (MTL-G). As the predicted accent label may be inaccurate, MTL-G performs worse than accent-specific adaptation; nevertheless, compared with the baseline model, it achieves 5.1% average relative WER reduction.
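The core idea, as described above, can be sketched as follows: a shared acoustic model feeds per-accent top (output) layers, and a gate vector selects or mixes their outputs. The gate is a one-hot accent label when the label is known (accent-specific routing), or soft posteriors from an accent classifier in the MTL-G setting. The layer sizes, the use of plain linear layers, and the soft-gate mixing are illustrative assumptions; the paper's exact architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, OUT, N_ACCENTS = 8, 4, 3

# One accent-specific top layer per target accent: weights W[a], biases b[a].
# (Hypothetical sizes; not from the paper.)
W = rng.standard_normal((N_ACCENTS, OUT, HIDDEN)) * 0.1
b = np.zeros((N_ACCENTS, OUT))

def gated_top_layer(h, gate):
    """Mix accent-specific top-layer outputs with gate weights.

    h    : (HIDDEN,) shared hidden representation for one frame
    gate : (N_ACCENTS,) non-negative weights summing to 1
    """
    per_accent = np.einsum("aoh,h->ao", W, h) + b  # (N_ACCENTS, OUT)
    return gate @ per_accent                        # (OUT,)

h = rng.standard_normal(HIDDEN)

# Known accent label (accent-specific routing): one-hot gate.
one_hot = np.array([0.0, 1.0, 0.0])
y_oracle = gated_top_layer(h, one_hot)

# MTL-G-style inference: soft gate from a (hypothetical) accent classifier.
soft = np.array([0.2, 0.7, 0.1])
y_soft = gated_top_layer(h, soft)

# With a one-hot gate, the output reduces to that accent's own top layer.
assert np.allclose(y_oracle, W[1] @ h + b[1])
```

With a one-hot gate this reduces exactly to accent-specific adaptation; the soft gate lets the model hedge across accents when the classifier is uncertain.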

 DOI: 10.21437/Interspeech.2019-3155

Cite as: Zhu, H., Wang, L., Zhang, P., Yan, Y. (2019) Multi-Accent Adaptation Based on Gate Mechanism. Proc. Interspeech 2019, 744-748, DOI: 10.21437/Interspeech.2019-3155.

@inproceedings{zhu2019multiaccent,
  author={Han Zhu and Li Wang and Pengyuan Zhang and Yonghong Yan},
  title={{Multi-Accent Adaptation Based on Gate Mechanism}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={744--748},
  doi={10.21437/Interspeech.2019-3155}
}