Learning a Translation Model from Word Lattices

Oliver Adams, Graham Neubig, Trevor Cohn, Steven Bird

Translation models have been used to improve automatic speech recognition when speech input is paired with a written translation, primarily for the task of computer-aided translation. Existing approaches require large amounts of parallel text for training the translation models, but for many language pairs this data is not available. We propose a model for learning lexical translation parameters directly from the word lattices for which a transcription is sought. The model is expressed through composition of each lattice with a weighted finite-state transducer representing the translation model, where inference is performed by sampling paths through the composed finite-state transducer. We show consistent word error rate reductions on two datasets, using between just 20 minutes and 4 hours of speech input, additionally outperforming a translation model trained on the 1-best path.
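The core operation the abstract describes, composing a word lattice with a translation WFST and sampling paths through the result, can be illustrated with a minimal self-contained sketch. This is not the authors' implementation: the toy lattice, the translation probabilities, and the dict-based machine encoding below are all illustrative assumptions, and a real system would use a WFST toolkit such as OpenFst.

```python
import random

def compose(lattice, tm):
    """Compose a weighted acceptor (the word lattice) with a transducer
    (the lexical translation model).

    Both machines are dicts: state -> list of (in_label, out_label,
    weight, next_state) arcs. The lattice is an acceptor, so its input
    and output labels coincide. Returns the arcs of the composed
    machine, whose states are (lattice_state, tm_state) pairs.
    """
    start = (0, 0)
    arcs = {}
    stack, seen = [start], {start}
    while stack:
        s1, s2 = stack.pop()
        for (i1, o1, w1, n1) in lattice.get(s1, []):
            for (i2, o2, w2, n2) in tm.get(s2, []):
                if o1 == i2:  # lattice output must match TM input
                    nxt = (n1, n2)
                    arcs.setdefault((s1, s2), []).append((i1, o2, w1 * w2, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return arcs

def sample_path(arcs, finals, state=(0, 0), rng=random):
    """Sample one path through the composed machine, choosing each arc
    with probability proportional to its weight."""
    path, weight = [], 1.0
    while state not in finals:
        choices = arcs[state]
        r = rng.random() * sum(w for (_, _, w, _) in choices)
        for (i, o, w, nxt) in choices:
            r -= w
            if r <= 0:
                path.append((i, o))
                weight *= w
                state = nxt
                break
    return path, weight

# Toy two-hypothesis lattice: did the recogniser hear "ship" or "sheep"?
lattice = {0: [("ship", "ship", 0.6, 1), ("sheep", "sheep", 0.4, 1)]}
# Toy lexical translation model with made-up P(target | source) weights.
tm = {0: [("ship", "bateau", 0.9, 0),
          ("sheep", "mouton", 0.9, 0),
          ("sheep", "bateau", 0.1, 0)]}

composed = compose(lattice, tm)
path, weight = sample_path(composed, finals={(1, 0)})
```

Each sampled path jointly scores a transcription hypothesis and its translation, which is what lets the lattice weights and translation parameters inform one another during learning.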

DOI: 10.21437/Interspeech.2016-862

Cite as:

Adams, O., Neubig, G., Cohn, T., Bird, S. (2016) Learning a Translation Model from Word Lattices. Proc. Interspeech 2016, 2518-2522.

@inproceedings{adams16_interspeech,
  author={Oliver Adams and Graham Neubig and Trevor Cohn and Steven Bird},
  title={Learning a Translation Model from Word Lattices},
  booktitle={Interspeech 2016},
  year={2016},
  pages={2518--2522},
  doi={10.21437/Interspeech.2016-862}
}