Identifying useful human correction feedback from an on-line machine translation service
Document type: Conference report
Rights access: Open Access
Abstract: Post-editing feedback provided by users of on-line translation services offers an excellent opportunity for the automatic improvement of statistical machine translation (SMT) systems. However, feedback provided by casual users is very noisy and must be automatically filtered in order to identify the potentially useful cases. We present a study on automatic feedback filtering in a real weblog collected from Reverso.net. We extend and re-annotate a training corpus, define an extended set of simple features, and approach the problem as a binary classification task, experimenting with linear and kernel-based classifiers and feature selection. Results on the feedback filtering task show a significant improvement over the majority class, but also a precision ceiling around 70-80%. This reflects the inherent difficulty of the problem and indicates that shallow features cannot fully capture its semantic nature. Despite the modest results on the filtering task, the classifiers prove effective in an application-based evaluation: incorporating a filtered set of feedback instances selected from a larger corpus significantly improves the performance of a phrase-based SMT system according to a set of standard evaluation metrics.
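The filtering setup described in the abstract can be illustrated with a minimal sketch: a linear (perceptron-style) classifier over a couple of shallow features computed on an (MT output, user feedback) pair. The feature set, data, and function names below are hypothetical illustrations, not the authors' actual system or corpus.

```python
# Illustrative sketch of binary feedback filtering with a linear classifier.
# Features, data, and names are hypothetical, not the paper's actual setup.

def features(original, feedback):
    """Shallow features for an (MT output, user feedback) pair:
    a bias term, a length ratio, and lexical overlap."""
    len_ratio = len(feedback) / max(len(original), 1)
    overlap = len(set(original.split()) & set(feedback.split()))
    return [1.0, len_ratio, float(overlap)]

def train_perceptron(data, epochs=20):
    """Train a simple perceptron; label 1 = useful feedback, -1 = noise."""
    w = [0.0] * 3
    for _ in range(epochs):
        for original, feedback, label in data:
            x = features(original, feedback)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if pred != label:  # mistake-driven update
                w = [wi + label * xi for wi, xi in zip(w, x)]
    return w

def classify(w, original, feedback):
    """Return 1 to keep the feedback instance, -1 to filter it out."""
    x = features(original, feedback)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
```

In practice the paper's extended feature set and kernel-based classifiers would replace this toy model; the sketch only shows the overall shape of the binary filtering decision.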
Citation: Barrón-Cedeño, A. [et al.]. Identifying useful human correction feedback from an on-line machine translation service. In: International Joint Conference on Artificial Intelligence. "Proceedings of the 23rd International Joint Conference on Artificial Intelligence". Beijing: 2013, p. 2057-2063.
- GPLN - Grup de Processament del Llenguatge Natural - Conference lectures/papers
- SOCO - Soft Computing - Conference lectures/papers
- Departament de Ciències de la Computació - Conference lectures/papers
- Departament de Teoria del Senyal i Comunicacions - Conference lectures/papers