A graphical interface for MT evaluation and error analysis

Cite as: hdl:2117/17980
Document type: Conference lecture
Defense date: 2012
Publisher: Association for Computational Linguistics
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work is prohibited without permission of the copyright holder.
Abstract
Error analysis in machine translation is a necessary step in order to investigate the strengths and weaknesses of the MT systems under development and allow fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, as well as a set of navigational functionalities, may lead towards advanced methods for automatic error analysis.
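As an illustration of the kind of metric-based evaluation described in the abstract, the sketch below scores a tiny hypothetical test bed with two lexical metrics (BLEU and chrF) using the sacrebleu Python library. This is not the ASIYA toolkit or its API; ASIYA additionally offers syntactic and semantic metrics that have no counterpart in this minimal example, and the sentences shown are invented for demonstration.

```python
# Illustrative sketch only: score a small test bed of MT outputs with two
# automatic metrics (BLEU and chrF) via sacrebleu. Not the ASIYA API.
import sacrebleu

# Hypothetical test bed: one MT output per segment, with two references each.
hypotheses = [
    "the cat sits on the mat",
    "there is a dog in the garden",
]
references = [
    ["the cat is sitting on the mat", "a cat sits on the mat"],
    ["a dog is in the garden", "there is a dog in the garden"],
]

# sacrebleu expects references grouped by reference set, not by sentence.
ref_sets = list(map(list, zip(*references)))

bleu = sacrebleu.corpus_bleu(hypotheses, ref_sets)
chrf = sacrebleu.corpus_chrf(hypotheses, ref_sets)

print(f"BLEU: {bleu.score:.2f}")
print(f"chrF: {chrf.score:.2f}")
```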
Citation: Gonzàlez, M.; Giménez, J.; Màrquez, L. A graphical interface for MT evaluation and error analysis. In: Annual Meeting of the Association for Computational Linguistics. "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics". Jeju: Association for Computational Linguistics, 2012, p. 139-144.
| Files | Description | Size | Format |
|---|---|---|---|
| Demo024.pdf | article demo asiya acl 2012 | 331.5 KB | PDF |