Linguistic-based evaluation criteria to identify statistical machine translation errors

Cite as: hdl:2117/7492
Document type: Conference lecture
Defense date: 2010-05
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
Machine translation evaluation methods are highly necessary in order to analyze the performance of translation systems. Up to now, the most traditional methods have been automatic measures such as BLEU and quality perception assessments performed by native human evaluators. In order to complement these traditional procedures, the current paper presents a new human evaluation based on expert knowledge of the errors encountered at several linguistic levels: orthographic, morphological, lexical, semantic and syntactic. The results obtained in these experiments show that some linguistic errors could have more influence than others when performing a perceptual evaluation.
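The abstract contrasts automatic measures such as BLEU with a human evaluation organized by linguistic error level. As a minimal illustration only (not code from the paper), the sketch below computes a corpus-level BLEU score with the sacrebleu library and tallies hypothetical error annotations over the five linguistic levels named above; the example sentences and annotations are invented.

```python
# pip install sacrebleu
from collections import Counter

import sacrebleu

# Hypothetical system outputs and reference translations (not from the paper).
hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]
references = [
    "the cat is sitting on the mat",
    "there is a book on the table",
]

# Automatic evaluation: sacrebleu expects a list of hypothesis strings and a
# list of reference streams (one list of strings per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")

# Complementary human evaluation: per-sentence error annotations at the
# linguistic levels used in the paper (orthographic, morphological,
# lexical, semantic, syntactic). These labels are invented for illustration.
annotations = [
    ["lexical", "syntactic"],
    [],
]
error_counts = Counter(err for sent in annotations for err in sent)
print(error_counts)
```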
Citation: Farrús, M. [et al.]. Linguistic-based evaluation criteria to identify statistical machine translation errors. In: "14th Annual Conference of the European Association for Machine Translation". Saint-Raphaël, 2010, pp. 167-173.
Publisher version: http://www.lsi.upc.edu/~nlp/papers/farrus.eamt2010.pdf
Files | Description | Size | Format
---|---|---|---
Farrús2010.pdf | EAMT2010 | 158.0 KB | PDF