Show simple item record

dc.contributor.author: Ozen, Guray
dc.contributor.author: Mateo, Sergi
dc.contributor.author: Ayguadé Parra, Eduard
dc.contributor.author: Labarta, Jesús
dc.contributor.author: Beyer, James B.
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors
dc.date.accessioned: 2016-10-31T17:36:59Z
dc.date.issued: 2016
dc.identifier.citation: Ozen, G., Mateo, S., Ayguade, E., Labarta, J., Beyer, J. Multiple target task sharing support for the OpenMP accelerator model. In: International Workshop on OpenMP. "OpenMP: memory, devices, and tasks: 12th International Workshop on OpenMP: IWOMP 2016: Nara, Japan: October 5-7, 2016: proceedings". Nara: Springer, 2016, p. 268-280.
dc.identifier.isbn: 978-3-319-45549-5
dc.identifier.uri: http://hdl.handle.net/2117/91300
dc.description.abstract: The use of GPU accelerators is becoming common in HPC platforms due to their effective performance and energy efficiency. In addition, new generations of multicore processors are being designed with wider vector units and/or larger hardware thread counts, also contributing to the peak performance of the whole system. Although current directive-based paradigms, such as OpenMP or OpenACC, support both accelerators and multicore-based hosts, they do not provide an effective and efficient way to use them concurrently, usually resulting in accelerated programs in which the potential computational performance of the host is not exploited. In this paper we propose an extension to the OpenMP 4.5 directive-based programming model to support the specification and execution of multiple instances of task regions on different devices (i.e. accelerators in conjunction with the vector and heavily multithreaded capabilities of multicore processors). The compiler is responsible for generating device-specific code for each device kind, delegating to the runtime system the dynamic scheduling of the tasks onto the available devices. The newly proposed clause conveys useful insight to guide the scheduler while keeping a clean, abstract and machine-independent programmer interface. The potential of the proposal is analyzed in a prototype implementation in the OmpSs compiler and runtime infrastructure. Performance evaluation is done using three kernels (N-Body, tiled matrix multiply and Stream) on different GPU-capable systems based on ARM, Intel x86 and IBM Power8. From the evaluation we observe speed-ups in the 8–20% range compared to versions in which only the GPU is used, reaching 96% of the additional peak performance thanks to the reduction of data transfers and the benefits introduced by the OmpSs NUMA-aware scheduler. (An illustrative code sketch of this multi-device task-sharing idea is included after the record below.)
dc.description.sponsorship: This work is partially supported by the IBM/BSC Deep Learning Center Initiative, by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316-P project and by the Generalitat de Catalunya (contract 2014-SGR-1051).
dc.format.extent: 13 p.
dc.language.iso: eng
dc.publisher: Springer
dc.subject: Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors
dc.subject: Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors::Arquitectures paral·leles
dc.subject.lcsh: Computer architecture
dc.subject.lcsh: Parallel processing (Electronic computers)
dc.subject.other: GPU accelerators
dc.subject.other: OpenMP accelerator model
dc.subject.other: Mercurium ACCelerator compiler
dc.title: Multiple target task sharing support for the OpenMP accelerator model
dc.type: Conference report
dc.subject.lemac: Arquitectura d'ordinadors
dc.subject.lemac: Processament en paral·lel (Ordinadors)
dc.contributor.group: Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
dc.identifier.doi: 10.1007/978-3-319-45550-1_19
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: http://link.springer.com/chapter/10.1007%2F978-3-319-45550-1_19
dc.rights.access: Restricted access - publisher's policy
drac.iddocument: 19162295
dc.description.version: Postprint (published version)
dc.relation.projectid: info:eu-repo/grantAgreement/MINECO/1PE/TIN2015-65316-P
dc.date.lift: 10000-01-01
upcommons.citation.author: Ozen, G., Mateo, S., Ayguade, E., Labarta, J., Beyer, J.
upcommons.citation.contributor: International Workshop on OpenMP
upcommons.citation.pubplace: Nara
upcommons.citation.published: true
upcommons.citation.publicationName: OpenMP: memory, devices, and tasks: 12th International Workshop on OpenMP: IWOMP 2016: Nara, Japan: October 5-7, 2016: proceedings
upcommons.citation.startingPage: 268
upcommons.citation.endingPage: 280
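
Illustrative sketch of the idea described in the abstract: the following minimal, self-contained C program uses only standard OpenMP 4.5 features (asynchronous target tasks created with the nowait clause) to split a STREAM-style triad, one of the three evaluated kernels, into chunked device tasks. The array sizes, chunking scheme and variable names are illustrative assumptions, and the sketch deliberately does not show the clause proposed in the paper, since its syntax is not given in this record; with a stock OpenMP 4.5 compiler every chunk is offloaded to the single default device.

/*
 * Illustrative sketch (not the paper's actual code or syntax):
 * a STREAM-style triad split into chunked OpenMP 4.5 target tasks.
 */
#include <stdio.h>
#include <stdlib.h>

#define N     (1 << 22)   /* total number of elements (assumption) */
#define CHUNK (1 << 20)   /* elements per target task (assumption) */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    #pragma omp parallel
    #pragma omp single
    {
        for (long lb = 0; lb < N; lb += CHUNK) {
            long ub = (lb + CHUNK > N) ? N : lb + CHUNK;
            /* Each chunk becomes a deferred target task (OpenMP 4.5 nowait).
               A standard runtime sends all of them to one device; the paper's
               proposal is about letting the scheduler spread such instances
               over the GPU and the multicore host concurrently. */
            #pragma omp target teams distribute parallel for nowait \
                    map(to: b[lb:ub-lb], c[lb:ub-lb]) map(from: a[lb:ub-lb])
            for (long i = lb; i < ub; i++)
                a[i] = b[i] + scalar * c[i];
        }
        #pragma omp taskwait   /* wait for all deferred target tasks */
    }

    printf("a[0] = %f (expected 7.0)\n", a[0]);
    free(a); free(b); free(c);
    return 0;
}

With standard OpenMP 4.5 semantics all of these task instances execute on the default accelerator; the extension evaluated in the paper, prototyped in the OmpSs compiler and runtime, lets the scheduler place some of them on the host cores and the rest on the GPU, which is where the reported 8–20% speed-ups over GPU-only versions come from.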


