Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2013.040109
Article Published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 4 Issue 1, 2013.
Abstract: This study compares the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) in translating English sentences into Arabic, relative to the effectiveness of English-to-Arabic human translation. Among the many automatic methods used to evaluate machine translators, the Bilingual Evaluation Understudy (BLEU) method was adopted and implemented to achieve the main goal of this study. BLEU is an automated measure that scores a machine translator's output by matching it against human reference translations; the higher the score, the closer the translation is to the human reference. Well-known English sayings, together with sentences collected manually from different Internet web sites, were used for evaluation purposes. The results of this study show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from English to Arabic.
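To illustrate the scoring idea the abstract describes, the following is a minimal sketch of sentence-level BLEU (clipped n-gram precision combined with a brevity penalty, as in Papineni et al., 2002). It is an illustrative simplification, not the exact implementation used in this study; the function and variable names are the author's own choices.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]

    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:          # candidate shorter than n tokens
            return 0.0
        # For each n-gram, the maximum count in any single reference.
        max_ref = Counter()
        for ref in refs:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        # Clip candidate counts by the reference counts ("modified precision").
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        precisions.append(clipped / sum(cand_counts.values()))

    if min(precisions) == 0:
        return 0.0

    # Brevity penalty: compare against the reference length closest
    # to the candidate length, penalizing translations that are too short.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to a reference scores 1.0, and the score falls as the n-gram overlap with the human references shrinks, which is the sense in which a higher BLEU score means a translation closer to the human one.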
Mohammed N. Al-Kabi, Taghreed M. Hailat, Emad M. Al-Shawakfa and Izzat M. Alsmadi, “Evaluating English to Arabic Machine Translation Using BLEU”, International Journal of Advanced Computer Science and Applications (IJACSA), 4(1), 2013. http://dx.doi.org/10.14569/IJACSA.2013.040109