A Federated Approach in Training Acoustic Models

Dimitrios Dimitriadis, Kenichi Kumatani, Robert Gmyr, Yashesh Gaur, Sefik Emre Eskimez


In this paper, a novel platform for training acoustic models based on Federated Learning (FL) is described. This is the first attempt to introduce Federated Learning techniques into Speech Recognition (SR) tasks. Besides the novelty of the task, the paper describes an easily generalizable FL platform and presents the design decisions made for this task. Among the novel algorithms introduced are a hierarchical optimization scheme employing pairs of optimizers and a gradient selection algorithm, leading to improvements in both training time and SR performance. The gradient selection algorithm weights the gradients during the aggregation step, effectively acting as a regularization process right before gradient propagation. This process may address one of the central challenges of FL, i.e., training on vastly heterogeneous data. The proposed system is validated experimentally on the LibriSpeech task, showing a 1.5× speed-up and a 6% relative word error rate reduction (WERR). The proposed Federated Learning system outperforms the gold standard of distributed training in both convergence speed and overall model performance. Further improvements have been observed on internal tasks.
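The abstract describes weighting client gradients during the aggregation step as a form of regularization. The paper's exact weighting criterion is not given here, so the sketch below is only an illustration of the general idea: a hypothetical `aggregate_gradients` helper that down-weights clients whose local loss is high (a plausible proxy for outlier or heterogeneous data), using a softmax over negative losses as an assumed weighting scheme.

```python
# Illustrative sketch of weighted gradient aggregation in federated learning.
# The weighting scheme (softmax over negative client losses) is an assumption
# for illustration; the paper's actual gradient selection criterion may differ.
import math

def aggregate_gradients(client_grads, client_losses, temperature=1.0):
    """Combine per-client gradients into a single update, down-weighting
    clients whose local loss is high (e.g. outlier / heterogeneous data)."""
    # Softmax weights over negative losses: lower loss -> larger weight.
    scores = [-loss / temperature for loss in client_losses]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of gradients (each gradient is a flat list of floats).
    dim = len(client_grads[0])
    return [sum(w * g[i] for w, g in zip(weights, client_grads))
            for i in range(dim)]

# Example: two well-behaved clients and one outlier with a much higher loss.
grads = [[1.0, 0.0], [0.8, 0.2], [10.0, -10.0]]
losses = [0.5, 0.6, 5.0]
agg = aggregate_gradients(grads, losses)
```

In this toy run the outlier client (loss 5.0) receives a near-zero weight, so the aggregated gradient stays close to the average of the two well-behaved clients rather than being dominated by the large outlier gradient.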


DOI: 10.21437/Interspeech.2020-1791

Cite as: Dimitriadis, D., Kumatani, K., Gmyr, R., Gaur, Y., Eskimez, S.E. (2020) A Federated Approach in Training Acoustic Models. Proc. Interspeech 2020, 981-985, DOI: 10.21437/Interspeech.2020-1791.


@inproceedings{Dimitriadis2020,
  author={Dimitrios Dimitriadis and Kenichi Kumatani and Robert Gmyr and Yashesh Gaur and Sefik Emre Eskimez},
  title={{A Federated Approach in Training Acoustic Models}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={981--985},
  doi={10.21437/Interspeech.2020-1791},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1791}
}