Spoken Language ‘Grammatical Error Correction’

Yiting Lu, Mark J.F. Gales, Yu Wang


Spoken language ‘grammatical error correction’ (GEC) is an important mechanism to help learners of a foreign language, here English, improve their spoken grammar. GEC is challenging for non-native spoken language due to interruptions from disfluent speech events, such as repetitions and false starts, and the difficulty of strictly defining what is acceptable in spoken language. Furthermore, there is little labelled data to train models. One way to mitigate the impact of speech events is to use a disfluency detection (DD) model. Removing the detected disfluencies brings the speech transcript closer to written language, for which significantly more labelled training data is available. This paper considers two approaches to leveraging DD models to boost spoken GEC performance. The first is sequential: a separately trained DD model acts as a pre-processing module, providing a more structured input to the GEC model. The second is to train the DD and GEC models in an end-to-end fashion, optimising both modules simultaneously. Embeddings enable a richer information flow in the end-to-end models. Experimental results show that DD effectively regulates the GEC input; end-to-end training works well when fine-tuned on limited labelled in-domain data; and incorporating acoustic information into DD further improves spoken GEC.
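The sequential approach described in the abstract can be illustrated with a minimal sketch. The function names, the D/F tag scheme, and the toy example below are assumptions for illustration, not the authors' code or data: a DD model tags each token as disfluent (D) or fluent (F), the disfluent tokens are removed, and the cleaned transcript is passed to a GEC model trained largely on written-language data.

```python
# Hypothetical sketch of a sequential DD -> GEC pipeline.
# The tag scheme (D = disfluent, F = fluent) and all names are
# illustrative assumptions, not the paper's actual implementation.

def remove_disfluencies(tokens, dd_tags):
    """Drop tokens the DD model tagged as disfluent, so the GEC
    input resembles written language."""
    return [tok for tok, tag in zip(tokens, dd_tags) if tag != "D"]

def sequential_pipeline(tokens, dd_tags, gec_model):
    """Stage 1: strip detected disfluencies.
    Stage 2: apply grammatical error correction to the fluent text."""
    fluent = remove_disfluencies(tokens, dd_tags)
    return gec_model(fluent)

# Toy transcript with a repetition ("i i") and a false start ("to to"):
tokens = ["i", "i", "want", "to", "to", "go", "home"]
dd_tags = ["D", "F", "F", "D", "F", "F", "F"]
print(remove_disfluencies(tokens, dd_tags))
# -> ['i', 'want', 'to', 'go', 'home']
```

In the paper's second, end-to-end approach, this hard token-removal step is instead learned jointly with the GEC model, with embeddings carrying richer information between the two modules than a binary keep/drop decision.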


 DOI: 10.21437/Interspeech.2020-1852

Cite as: Lu, Y., Gales, M.J., Wang, Y. (2020) Spoken Language ‘Grammatical Error Correction’. Proc. Interspeech 2020, 3840-3844, DOI: 10.21437/Interspeech.2020-1852.


@inproceedings{Lu2020,
  author={Yiting Lu and Mark J.F. Gales and Yu Wang},
  title={{Spoken Language ‘Grammatical Error Correction’}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3840--3844},
  doi={10.21437/Interspeech.2020-1852},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1852}
}