Universitat Politècnica de Catalunya
UPCommons. Global access to UPC knowledge


A differentiable BLEU loss. Analysis and first results

Cite as:
hdl:2117/117201

Casas Manzanares, Noé
Rodríguez Fonollosa, José Adrián
Ruiz Costa-Jussà, Marta
Document type: Conference report
Defense date: 2018
Rights access: Open Access
Attribution-NonCommercial-NoDerivs 3.0 Spain
Except where otherwise noted, content on this work is licensed under a Creative Commons license : Attribution-NonCommercial-NoDerivs 3.0 Spain
Projects: TECNOLOGIAS DE APRENDIZAJE PROFUNDO APLICADAS AL PROCESADO DE VOZ Y AUDIO (MINECO-TEC2015-69266-P)
AUTONOMOUS LIFELONG LEARNING INTELLIGENT SYSTEMS (AEI-PCIN-2017-079)
Abstract
In natural language generation tasks, like neural machine translation and image captioning, there is usually a mismatch between the optimized loss and the de facto evaluation criterion, namely token-level maximum likelihood and corpus-level BLEU score. This article tries to reduce this gap by defining differentiable computations of the BLEU and GLEU scores. We test this approach on simple tasks, obtaining valuable lessons on its potential applications but also its pitfalls, mainly that these loss functions push each token in the hypothesis sequence toward the average of the tokens in the reference, resulting in a poor training signal.
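The paper's exact differentiable BLEU/GLEU construction is not reproduced on this record page. As a rough, hypothetical sketch of the idea only (the function name is mine, and BLEU's count clipping, higher-order n-grams, and brevity penalty are deliberately omitted), an expected "soft" unigram precision over softmax decoder outputs might look like:

```python
import numpy as np

def soft_unigram_precision(hyp_probs, ref_ids, vocab_size):
    """Expected unigram precision when the hypothesis is a sequence of
    softmax distributions rather than hard tokens.

    hyp_probs: (T, V) array, each row a probability distribution over the vocabulary.
    ref_ids:   iterable of reference token ids.

    Simplified illustration: no count clipping, no brevity penalty, unigrams only.
    """
    ref_mask = np.zeros(vocab_size)
    ref_mask[list(set(ref_ids))] = 1.0      # 1 for tokens that appear in the reference
    expected_matches = hyp_probs @ ref_mask  # per-position probability of matching
    return expected_matches.mean()           # smooth in hyp_probs, hence differentiable

# Demo: a fully uncommitted (uniform) hypothesis over V=5 tokens
T, V = 3, 5
uniform = np.full((T, V), 1.0 / V)
print(soft_unigram_precision(uniform, [1, 2, 2], V))  # → 0.4 (2 distinct ref tokens / 5)
```

Note how this surface already exhibits the pitfall the abstract reports: every hypothesis position receives gradient toward every reference token at once, so maximizing the score pushes each output distribution toward a blend of the reference tokens rather than toward any one correct token.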
Citation: Casas, N., Fonollosa, José A. R., Ruiz, M. A differentiable BLEU loss. Analysis and first results. In: International Conference on Learning Representations. "ICLR 2018 Workshop Track: 6th International Conference on Learning Representations: Vancouver Convention Center, Vancouver, BC, Canada: April 30-May 3, 2018". 2018.
URI: http://hdl.handle.net/2117/117201
Publisher version: https://openreview.net/forum?id=HkG7hzyvf
Collections
  • VEU - Grup de Tractament de la Parla - Ponències/Comunicacions de congressos [436]
  • Departament de Teoria del Senyal i Comunicacions - Ponències/Comunicacions de congressos [3.210]

File: e57a49a5ea21b841d2546ba9aade2d627010d8e0.pdf (421,9Kb, PDF)


© UPC. Servei de Biblioteques, Publicacions i Arxius

info.biblioteques@upc.edu
