From bilingual to multilingual neural machine translation by incremental training

Document type: Conference lecture
Defense date: 2019
Publisher: Association for Computational Linguistics
Rights access: Open Access
Abstract
Multilingual Neural Machine Translation approaches are typically based on task-specific models, so adding one more language requires retraining the whole system. In this work, we propose a new training schedule, based on joint training and language-independent encoder/decoder modules, that allows the system to scale to more languages without modifying previously trained components and that enables zero-shot translation. This work in progress shows results close to the state of the art on the WMT task.
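To illustrate the incremental scheme the abstract describes (language-specific encoder/decoder modules that map to a shared representation, new-language modules trained while previously trained modules stay frozen, and any encoder paired with any decoder for zero-shot directions), here is a minimal, hypothetical PyTorch sketch. The module classes, sizes, language codes, and training loop below are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical sketch of incremental multilingual NMT with per-language
# encoder/decoder modules; details are assumptions, not the paper's setup.
import torch
import torch.nn as nn


class LangEncoder(nn.Module):
    """Language-specific encoder mapping token ids to a shared representation."""

    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        outputs, _ = self.rnn(self.embed(tokens))
        return outputs  # (batch, seq, hidden): the shared intermediate space


class LangDecoder(nn.Module):
    """Language-specific decoder reading from the shared representation."""

    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, shared: torch.Tensor) -> torch.Tensor:
        outputs, _ = self.rnn(shared)
        return self.proj(outputs)  # per-position vocabulary logits


# Modules assumed to be already trained jointly on an initial pair (en<->de).
encoders = {"en": LangEncoder(32000), "de": LangEncoder(32000)}
decoders = {"en": LangDecoder(32000), "de": LangDecoder(32000)}

# Incremental step: add a new language by training only its own modules,
# keeping every previously trained module frozen.
encoders["fr"], decoders["fr"] = LangEncoder(32000), LangDecoder(32000)
for lang in ("en", "de"):
    encoders[lang].requires_grad_(False)
    decoders[lang].requires_grad_(False)

new_params = list(encoders["fr"].parameters()) + list(decoders["fr"].parameters())
optimizer = torch.optim.Adam(new_params, lr=1e-3)

# One illustrative update on a toy en->fr batch (random ids stand in for data).
src = torch.randint(0, 32000, (8, 12))
tgt = torch.randint(0, 32000, (8, 12))
logits = decoders["fr"](encoders["en"](src))
loss = nn.functional.cross_entropy(logits.reshape(-1, 32000), tgt.reshape(-1))
loss.backward()
optimizer.step()

# Zero-shot direction: de->fr was never trained directly, but any encoder can
# be combined with any decoder because they meet in the shared representation.
with torch.no_grad():
    zero_shot_logits = decoders["fr"](encoders["de"](src))
```

The design point being sketched is that the previously trained components are never touched when a language is added: only the new language's encoder and decoder receive gradient updates, which is what lets the system scale without retraining the whole model.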
Citation: Escolano, C.; Ruiz, M.; Fonollosa, J. A. R. From bilingual to multilingual neural machine translation by incremental training. In: Annual Meeting of the Association for Computational Linguistics. "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop". Stroudsburg, PA: Association for Computational Linguistics, 2019, p. 236-242.
Publisher version: https://www.aclweb.org
Other identifiers: https://www.aclweb.org/anthology/P19-2033
Collections
- Departament de Ciències de la Computació - Conference lectures/papers [1.118]
- Doctorat en Teoria del Senyal i Comunicacions - Conference lectures/papers [81]
- VEU - Grup de Tractament de la Parla - Conference lectures/papers [410]
- Departament de Teoria del Senyal i Comunicacions - Conference lectures/papers [3.041]
Files | Description | Size | Format
---|---|---|---
P19-2033.pdf | | 308.0 KB | PDF
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution 3.0 Spain.