Automatic Speech Recognition Benchmark for Air-Traffic Communications

Juan Zuluaga-Gomez, Petr Motlicek, Qingran Zhan, Karel Veselý, Rudolf Braun


Advances in Automatic Speech Recognition (ASR) over the last decade have opened new areas of speech-based automation, such as Air-Traffic Control (ATC) environments. Currently, voice communication and data-link communications are the only means of contact between pilots and Air-Traffic Controllers (ATCo); the former is the most widely used, while the latter is a non-spoken method mandatory for oceanic messages and limited to certain domestic uses. ASR systems in ATCo environments face increased complexity due to accents of non-English speakers, cockpit noise, speaker-dependent biases, and small in-domain ATC databases for training. Here, we introduce CleanSky EC-H2020 ATCO2, a project that aims to develop an ASR-based platform to collect, organize, and automatically pre-process ATCo speech data from the air space. This paper presents an exploratory benchmark of several state-of-the-art ASR models trained on more than 170 hours of ATCo speech data. We demonstrate that performance degradation caused by speakers' accents is minimized by the amount of training data, making the system feasible for ATC environments. The developed ASR system achieves an average word error rate (WER) of 7.75% across four databases. An additional 35% relative improvement in WER is achieved on one test set when training a TDNNF system with byte-pair encoding.
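As a reminder of the metric reported above, WER is the word-level Levenshtein distance between reference and hypothesis transcripts, normalized by the reference length. The sketch below is illustrative only; the function and the ATC-style example utterance are not from the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical ATC-style utterance (not from the paper's test sets):
ref = "lufthansa three two one descend flight level eight zero"
hyp = "lufthansa three two one descend level eight zero"
print(f"WER: {wer(ref, hyp):.3f}")  # one deletion over nine reference words
```

In practice, toolkits such as Kaldi (used for TDNNF training) report WER with the same edit-distance definition.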


DOI: 10.21437/Interspeech.2020-2173

Cite as: Zuluaga-Gomez, J., Motlicek, P., Zhan, Q., Veselý, K., Braun, R. (2020) Automatic Speech Recognition Benchmark for Air-Traffic Communications. Proc. Interspeech 2020, 2297-2301, DOI: 10.21437/Interspeech.2020-2173.


@inproceedings{Zuluaga-Gomez2020,
  author={Juan Zuluaga-Gomez and Petr Motlicek and Qingran Zhan and Karel Veselý and Rudolf Braun},
  title={{Automatic Speech Recognition Benchmark for Air-Traffic Communications}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2297--2301},
  doi={10.21437/Interspeech.2020-2173},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2173}
}