Evaluating the underlying gender bias in contextualized word embeddings
Document type: Conference report
Publisher: Association for Computational Linguistics
Rights access: Open Access
Gender bias strongly affects natural language processing applications. Word embeddings have clearly been shown both to preserve and to amplify the gender biases present in current data sources. Recently, contextualized word embeddings have advanced earlier word embedding techniques by computing word vector representations that depend on the sentence in which the word appears. In this paper, we study the impact of this conceptual change in word embedding computation in relation to gender bias. Our analysis includes different measures previously applied in the literature to standard word embeddings. Our findings suggest that contextualized word embeddings are less biased than standard ones, even when the latter are debiased.
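One family of bias measures applied to standard word embeddings in the literature projects words onto a gender direction (e.g. the "he"-"she" axis) and reads off the sign and magnitude of the projection. The sketch below illustrates the idea with tiny made-up 3-dimensional vectors; real embeddings are learned, high-dimensional, and the specific numbers here are assumptions for illustration only.

```python
import math

# Toy static embeddings (made-up values for illustration only;
# real embeddings are learned, high-dimensional vectors).
emb = {
    "he":       [0.9, 0.1, 0.2],
    "she":      [-0.9, 0.1, 0.2],
    "engineer": [0.4, 0.7, 0.1],
    "nurse":    [-0.5, 0.6, 0.2],
}

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def bias_score(word):
    """Cosine similarity of a word with the he-she gender direction."""
    direction = normalize([a - b for a, b in zip(emb["he"], emb["she"])])
    w = normalize(emb[word])
    return sum(a * b for a, b in zip(w, direction))

print(f"engineer: {bias_score('engineer'):+.2f}")  # positive: leans "he"
print(f"nurse:    {bias_score('nurse'):+.2f}")  # negative: leans "she"
```

For contextualized embeddings, the same computation would have to be applied to the per-sentence vector of each occurrence rather than to a single fixed vector per word, which is the conceptual change the paper examines.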
Citation: Basta, C.; Costa-jussà, M. R.; Casas, N. Evaluating the underlying gender bias in contextualized word embeddings. In: ACL Workshop on Gender Bias in Natural Language Processing. "The 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: NAACL HLT 2019: Proceedings of the Conference: June 2-June 7, 2019". Stroudsburg, PA: Association for Computational Linguistics, 2019, p. 33-39.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without permission of the copyright holder.