Publisher: Association for Computational Linguistics
Rights access: Open Access
Error analysis in machine translation is a necessary step for investigating the strengths and weaknesses of the MT systems under development and for allowing fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees, as well as shallow syntactic and semantic annotations and word alignments. The intelligent visualization of the linguistic structures used by the metrics, together with a set of navigation functionalities, may lead towards advanced methods for automatic error analysis.
Citation: Gonzalez, M.; Giménez, J.; Marquez, L. A Graphical Interface for MT Evaluation and Error Analysis. In: Annual Meeting of the Association for Computational Linguistics. "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics". Jeju: Association for Computational Linguistics, 2012, p. 139-144.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without permission of the copyright holder. If you wish to make any use of the work not provided for by law, please contact: firstname.lastname@example.org