UPC-CORE : What can machine translation evaluation metrics and Wikipedia do for estimating semantic textual similarity?

Cite as: hdl:2117/20375
Document type: Conference lecture
Defense date: 2013
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work are prohibited without permission of the copyright holder.
Abstract
In this paper we discuss our participation in the 2013 SemEval Semantic Textual Similarity task. Our core features include (i) a set of metrics borrowed from automatic machine translation evaluation, originally intended to compare automatic translations against reference translations, and (ii) an instance of explicit semantic analysis, built upon the opening paragraphs of Wikipedia 2010 articles. Our similarity estimator relies on a support vector regressor with an RBF kernel. Our best approach required 13 machine translation metrics plus explicit semantic analysis and ranked 65th in the competition. Our post-competition analysis shows that the features have a good expression level, but overfitting and, mainly, normalization issues caused our correlation values to decrease.
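The estimator described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual code: the feature matrix, data, and hyperparameter values are synthetic stand-ins for the 13 MT evaluation metrics and the explicit-semantic-analysis score the paper uses, and the final clipping step reflects the bounded 0-5 STS scale rather than any documented detail of the system.

```python
# Hypothetical sketch of an STS estimator of the kind described:
# an RBF-kernel support vector regressor trained on per-pair feature
# vectors (e.g., 13 MT evaluation metrics + 1 ESA similarity score).
# All data below is synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# 100 training sentence pairs, 14 features each.
X_train = rng.random((100, 14))
# Gold similarity scores on the STS 0-5 scale.
y_train = rng.random(100) * 5

model = SVR(kernel="rbf", C=1.0, gamma="scale")
model.fit(X_train, y_train)

X_test = rng.random((10, 14))
pred = model.predict(X_test)
# STS scores are bounded; clipping guards against out-of-range output,
# one place where the normalization issues the paper mentions can bite.
pred = np.clip(pred, 0.0, 5.0)
print(pred)
```

Consistent feature normalization between training and test data is exactly the kind of detail the post-competition analysis identifies as the main source of the correlation drop.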
Citation: Barrón-Cedeño, A. [et al.]. UPC-CORE: What can machine translation evaluation metrics and Wikipedia do for estimating semantic textual similarity? In: "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics". Atlanta: 2013, p. 1-5.
Files | Size | Format
---|---|---
60_Paper.pdf | 57.80 KB | PDF