On the locality of attention in direct speech translation

Cite as:
hdl:2117/369036
Document type: Conference lecture
Defense date: 2022
Publisher: Association for Computational Linguistics
Rights access: Open Access
This work is protected by the corresponding intellectual and industrial property rights. Except where otherwise noted, its contents are licensed under a Creative Commons license: Attribution 4.0 International.
Abstract
Transformers have achieved state-of-the-art results across multiple NLP tasks. However, the complexity of the self-attention mechanism scales quadratically with the sequence length, creating an obstacle for tasks involving long sequences, such as those in the speech domain. In this paper, we discuss the usefulness of self-attention for Direct Speech Translation. First, we analyze the layer-wise token contributions in the self-attention of the encoder, unveiling local diagonal patterns. To prove that some attention weights are avoidable, we propose to substitute the standard self-attention with a local efficient one, setting the amount of context used based on the results of the analysis. With this approach, our model matches the baseline performance and improves efficiency by skipping the computation of the weights that standard attention discards.
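As a rough illustration of the kind of local (windowed) self-attention the abstract refers to, and not the authors' actual implementation, the sketch below masks attention scores outside a fixed-size window around each position. The function name `local_self_attention` and the `window` parameter are assumptions made for this example.

```python
import torch

def local_self_attention(q, k, v, window: int):
    """Windowed self-attention: each position attends only to keys
    within +/- `window` positions of itself.
    q, k, v have shape (batch, seq_len, d)."""
    d = q.size(-1)
    t = q.size(1)
    # Full scaled dot-product scores: (batch, seq_len, seq_len).
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5
    # Mask out positions farther than `window` from the diagonal.
    idx = torch.arange(t, device=q.device)
    mask = (idx[None, :] - idx[:, None]).abs() > window
    scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```

Note that this naive version still materializes the full score matrix before masking; an efficient implementation, like the one the paper advocates, would compute only the banded scores, avoiding the quadratic cost altogether.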
Citation: Alastruey, B. [et al.]. On the locality of attention in direct speech translation. In: Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. "ACL 2022, The 60th Annual Meeting of the Association for Computational Linguistics: proceedings of the Student Research Workshop: May 22-27, 2022". Stroudsburg, PA: Association for Computational Linguistics, 2022, p. 402-412. ISBN 978-1-955917-23-0. DOI 10.18653/v1/2022.acl-srw.32.
ISBN: 978-1-955917-23-0
Publisher version: https://aclanthology.org/2022.acl-srw.32/
Collections
- Departament de Ciències de la Computació - Conference papers/presentations [1,325]
- Doctorat en Teoria del Senyal i Comunicacions - Conference papers/presentations [291]
- Doctorat en Intel·ligència Artificial - Conference papers/presentations [58]
- VEU - Grup de Tractament de la Parla - Conference papers/presentations [438]
Files | Description | Size | Format | View
---|---|---|---|---
2022.acl-srw.32.pdf | | 5.098 MB | | View/Open