International Workshop on Spoken Language Translation (IWSLT) 2008

Honolulu, Hawaii, USA
October 20-21, 2008

Statistical Machine Translation without Long Parallel Sentences for Training Data

Jin'ichi Murakami, Masato Tokuhisa, Satoru Ikehara

Department of Information and Knowledge, Engineering Faculty of Engineering, Tottori University, Japan

In this study, we focused on the reliability of the phrase table. We have been building phrase tables using Och's method, which sometimes generates completely wrong phrase entries. We found that such faulty phrase tables were caused by long parallel sentences, so we removed these long parallel sentences from the training data. We also used standard tools for statistical machine translation, such as "Giza++", "Moses", and "training-phrasemodel.perl".
   We obtained BLEU scores of 0.4047 (TEXT) and 0.3553 (1-BEST) on the Challenge-EC task with our proposed method. In contrast, the standard method obtained BLEU scores of 0.3975 (TEXT) and 0.3482 (1-BEST) on the same task. This means that our proposed method was effective for the Challenge-EC task. However, it was not effective for the BTEC-CE and Challenge-CE tasks, and our system's overall performance was not strong: for example, it ranked 7th out of 8 systems on the Challenge-EC task.
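The preprocessing idea described above, removing long parallel sentences before phrase-table training, can be sketched as a simple length filter over a sentence-aligned corpus. This is a minimal illustration only; the word-count cutoff of 40 is an assumed value, not a threshold reported by the authors.

```python
# Sketch of length-based filtering of a parallel corpus before
# phrase-table training. MAX_WORDS = 40 is an assumed cutoff,
# not a value taken from the paper.
MAX_WORDS = 40

def filter_parallel(src_lines, tgt_lines, max_words=MAX_WORDS):
    """Keep only sentence pairs where both sides are within the limit."""
    kept = []
    for src, tgt in zip(src_lines, tgt_lines):
        if len(src.split()) <= max_words and len(tgt.split()) <= max_words:
            kept.append((src, tgt))
    return kept

# Example: the long pair (50 words per side) is dropped.
src = ["a short source sentence", "w " * 50]
tgt = ["a short target sentence", "x " * 50]
print(len(filter_parallel(src, tgt)))  # → 1
```

The filtered pairs would then be passed to Giza++ for word alignment and on to phrase extraction as usual.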


Bibliographic reference.  Murakami, Jin'ichi / Tokuhisa, Masato / Ikehara, Satoru (2008): "Statistical machine translation without long parallel sentences for training data", In IWSLT-2008, 132-137.