Journal: 3L Language, Linguistics and Literature: The Southeast Asian Journal of English Language Studies
Print ISSN: 0128-5157
Year: 2017
Volume: 23
Issue: 4
Pages: 251-264
DOI:10.17576/3L-2017-2304-19
Language: English
Publisher: Penerbit UKM
Abstract: Research in automated translation mostly aims to develop translation systems that further enhance the transfer of knowledge and information. This need for transfer has driven major steps in machine translation (MT) software development and encouraged further research in various MT-related areas. However, there have been no focused investigations of evaluation criteria, particularly evaluation that considers human evaluators and the reconciliation of human translation (HT) and MT. Thus, focusing on two attributes for evaluation, namely Accuracy and Intelligibility, a study was conducted to investigate translation evaluation criteria for content and language transfer through the reconciliation of HT and MT evaluation based on human evaluators' perception. The study focused on human evaluators' expectations of the range of criteria for HT and MT under the two attributes, and the evaluation was tested on an MT system to observe its performance in terms of Accuracy and Intelligibility. This paper reports the range of criteria for evaluating translation in terms of Intelligibility, as expected by human evaluators of HT and MT, with respect to content and language transfer. The study uses a mixed-method approach combining soft and hard data collection. The results demonstrate that the range of each criterion identified for content evaluation in HT is expected to be higher than in MT. The implications of the study are described to provide an understanding of the evaluation of human and automated translation in terms of Intelligibility.