Improving the interoperability between MPI and task-based programming models
Cite as: hdl:2117/125347
Document type: Conference proceedings text
Publication date: 2018
Publisher: Association for Computing Machinery (ACM)
Access conditions: Open access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to existing legal exemptions, its reproduction, distribution, public communication, or transformation without the authorization of the rights holder is prohibited.
Projects:
COMPUTACION DE ALTAS PRESTACIONES VII (MINECO-TIN2015-65316-P)
MIRACLS - Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy of radionuclides (EC-H2020-679038)
ECO-H-MEM - Advanced Ecosystem for Broad Heterogeneous Memory Usage (EC-H2020-749516)
BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION (MINECO-SEV-2015-0493)
Abstract
In this paper we propose an API to pause and resume task execution depending on external events. We leverage this generic API to improve the interoperability between MPI synchronous communication primitives and tasks. When an MPI operation blocks, the running task is paused so that the runtime system can schedule a new task on the core that became idle. Once the MPI operation completes, the paused task is placed back on the runtime system's ready queue. We expose our proposal through a new MPI threading level, which we implement through two approaches.
The first approach is an MPI wrapper library that works with any MPI implementation by intercepting synchronous MPI calls and implementing them on top of their asynchronous counterparts. In this case, the task-based runtime system is also extended to periodically check for pending MPI operations and resume the corresponding tasks once those operations complete. The second approach consists in modifying MPICH, a well-known MPI implementation, to call the pause/resume API directly when a synchronous MPI operation blocks and when it completes, respectively.
Our experiments reveal that this proposal not only simplifies the development of hybrid MPI+OpenMP applications that naturally overlap computation and communication phases; it also improves application performance and scalability by removing artificial dependencies across communication tasks.
Citation: Sala, K., Bellón, J., Farré, P., Teruel, X., Pérez, J., Peña, A., Holmes, D., Beltran, V., Labarta, J. Improving the interoperability between MPI and task-based programming models. In: European MPI Users' Group Meeting. "Proceedings of the 25th European MPI Users' Group Meeting: Barcelona, Spain, September 23-26, 2018". New York: Association for Computing Machinery (ACM), 2018, p. 1-11.
ISBN: 978-1-4503-6492-8
Publisher's version: https://dl.acm.org/citation.cfm?id=3236382
Files: interop-paper-1.pdf (720.1 KB, PDF)