Show simple item record
Integrating blocking and non-blocking MPI primitives with task-based programming models
dc.contributor.author | Sala Penadés, Kevin |
dc.contributor.author | Teruel García, Xavier |
dc.contributor.author | Pérez Cáncer, Josep Maria |
dc.contributor.author | Peña, Antonio J. |
dc.contributor.author | Beltran, Vicenç |
dc.contributor.author | Labarta Mancho, Jesús José |
dc.contributor.other | Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors |
dc.contributor.other | Barcelona Supercomputing Center |
dc.date.accessioned | 2020-05-04T09:38:37Z |
dc.date.available | 2020-12-21T01:33:19Z |
dc.date.issued | 2019-07 |
dc.identifier.citation | Sala, K. [et al.]. Integrating blocking and non-blocking MPI primitives with task-based programming models. "Parallel computing", July 2019, vol. 85, p. 153-166. |
dc.identifier.issn | 0167-8191 |
dc.identifier.other | https://arxiv.org/pdf/1901.03271.pdf |
dc.identifier.uri | http://hdl.handle.net/2117/186108 |
dc.description.abstract | In this paper we present the Task-Aware MPI library (TAMPI), which integrates both blocking and non-blocking MPI primitives with task-based programming models. The TAMPI library leverages two new runtime APIs to improve both programmability and performance of hybrid applications. The first API allows pausing and resuming the execution of a task depending on external events. This API is used to improve the interoperability between blocking MPI communication primitives and tasks. When an MPI operation executed inside a task blocks, the running task is paused so that the runtime system can schedule a new task on the core that became idle. Once the blocked MPI operation is completed, the paused task is placed back on the runtime system's ready queue, so eventually it will be scheduled again and its execution will be resumed. The second API defers the release of the dependencies associated with a task's completion until some external events are fulfilled. This API is composed of only two functions: one to bind external events to a running task, and another to notify the completion of previously bound external events. TAMPI leverages this API to bind non-blocking MPI operations to tasks, deferring the release of their task dependencies until both the task's execution and all its bound MPI operations are completed. Our experiments reveal that the enhanced features of TAMPI not only simplify the development of hybrid MPI+OpenMP applications that use blocking or non-blocking MPI primitives, but also naturally overlap computation and communication phases, which improves application performance and scalability by removing artificial dependencies across communication tasks. |
dc.description.sponsorship | This work has been developed with the support of the European Union H2020 Programme through both the INTERTWinE project (agreement no. 671602) and the Marie Skłodowska-Curie grant (agreement no. 749516); the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Program (SEV-2015-0493); the Spanish Ministry of Science and Innovation (TIN2015-65316-P) and the Generalitat de Catalunya (2017-SGR1414). |
dc.format.extent | 14 p. |
dc.language.iso | eng |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 Spain |
dc.rights | ©2018 Elsevier |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.subject | Àrees temàtiques de la UPC::Informàtica::Programació |
dc.subject.lcsh | Application program interfaces (Computer software) |
dc.subject.other | MPI |
dc.subject.other | OpenMP |
dc.subject.other | OmpSs-2 |
dc.subject.other | TAMPI |
dc.subject.other | Interoperability |
dc.subject.other | Task |
dc.title | Integrating blocking and non-blocking MPI primitives with task-based programming models |
dc.type | Article |
dc.subject.lemac | Interfícies de programació d'aplicacions (Programari) |
dc.contributor.group | Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions |
dc.identifier.doi | 10.1016/j.parco.2018.12.008 |
dc.description.peerreviewed | Peer Reviewed |
dc.relation.publisherversion | https://www.sciencedirect.com/science/article/pii/S0167819118303326 |
dc.rights.access | Open Access |
local.identifier.drac | 28076971 |
dc.description.version | Postprint (author's final draft) |
dc.relation.projectid | info:eu-repo/grantAgreement/MINECO//TIN2015-65316-P/ES/COMPUTACION DE ALTAS PRESTACIONES VII/ |
dc.relation.projectid | info:eu-repo/grantAgreement/AGAUR/2017 SGR 1414 |
dc.relation.projectid | info:eu-repo/grantAgreement/EC/H2020/671602/EU/Programming Model INTERoperability ToWards Exascale (INTERTWinE)/INTERTWINE |
dc.relation.projectid | info:eu-repo/grantAgreement/EC/H2020/749516/EU/Advanced Ecosystem for Broad Heterogeneous Memory Usage/ECO-H-MEM |
local.citation.author | Sala, K.; Teruel, X.; Pérez, J.; Peña, A.; Beltran, V.; Labarta, J. |
local.citation.publicationName | Parallel computing |
local.citation.volume | 85 |
local.citation.startingPage | 153 |
local.citation.endingPage | 166 |
Files in this item
This item appears in the following collections
- Journal articles [318]
- Journal articles [1,050]
- Journal articles [382]