Show simple item record

dc.contributor.author: Marjanovic, Vladimir
dc.contributor.author: Labarta Mancho, Jesús José
dc.contributor.author: Ayguadé Parra, Eduard
dc.contributor.author: Valero Cortés, Mateo
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors
dc.date.accessioned: 2014-11-27T13:27:25Z
dc.date.created: 2010
dc.date.issued: 2010
dc.identifier.citation: Marjanovic, V. [et al.]. Overlapping communication and computation by using a hybrid MPI/SMPSs approach. In: ACM International Conference on Supercomputing. "2010 International Conference on Supercomputing: June 2–4, 2010, Tsukuba, Ibaraki, Japan: proceedings". Tsukuba: ACM Press. Association for Computing Machinery, 2010, p. 5-16.
dc.identifier.isbn: 978-1-4503-0018-6
dc.identifier.uri: http://hdl.handle.net/2117/24872
dc.description.abstract: Communication overhead is one of the dominant factors affecting performance in high-end computing systems. To reduce the negative impact of communication, programmers overlap communication and computation by using asynchronous communication primitives. This increases code complexity, requiring more development effort and making programs less readable. This paper presents the hybrid use of MPI and SMPSs (SMP superscalar, a task-based shared-memory programming model), allowing the programmer to easily introduce the asynchrony necessary to overlap communication and computation. We also describe implementation issues in the SMPSs runtime that support its efficient interoperation with MPI. We demonstrate the hybrid use of MPI/SMPSs with four application kernels (matrix multiply, Jacobi, conjugate gradient and NAS BT) and with the high-performance LINPACK benchmark. For the application kernels, the hybrid MPI/SMPSs versions significantly improve the performance of their pure MPI counterparts. For LINPACK we get close to the asymptotic performance at relatively small problem sizes and still get significant benefits at large problem sizes. In addition, the hybrid MPI/SMPSs approach substantially reduces code complexity and is less sensitive to network bandwidth and operating system noise than the pure MPI versions. [An illustrative sketch of the manual overlap style described here follows the record below.]
dc.format.extent: 12 p.
dc.language.iso: eng
dc.publisher: ACM Press. Association for Computing Machinery
dc.subject: Àrees temàtiques de la UPC::Informàtica::Llenguatges de programació
dc.subject: Àrees temàtiques de la UPC::Informàtica
dc.subject.lcsh: High performance computing
dc.subject.lcsh: Supercomputers
dc.subject.other: Parallel programming model
dc.subject.other: MPI
dc.subject.other: Hybrid MPI/SMPSs
dc.subject.other: LINPACK
dc.title: Overlapping communication and computation by using a hybrid MPI/SMPSs approach
dc.type: Conference report
dc.subject.lemac: Càlcul intensiu (Informàtica)
dc.subject.lemac: Supercomputadors
dc.contributor.group: Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
dc.identifier.doi: 10.1145/1810085.1810091
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: https://dl.acm.org/doi/10.1145/1810085.1810091
dc.rights.access: Restricted access - publisher's policy
local.identifier.drac: 15117223
dc.description.version: Postprint (published version)
dc.date.lift: 10000-01-01
local.citation.author: Marjanovic, V.; Labarta, J.; Ayguade, E.; Valero, M.
local.citation.contributor: ACM International Conference on Supercomputing
local.citation.pubplace: Tsukuba
local.citation.publicationName: 2010 International Conference on Supercomputing: June 2–4, 2010, Tsukuba, Ibaraki, Japan: proceedings
local.citation.startingPage: 5
local.citation.endingPage: 16
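
The abstract above contrasts the hybrid MPI/SMPSs model with the conventional practice of overlapping communication and computation by hand through asynchronous (nonblocking) MPI primitives. For reference only, the fragment below is a minimal sketch of that manual style using standard MPI_Irecv/MPI_Isend/MPI_Waitall calls; it is not taken from the paper, it does not use SMPSs, and the buffer layout, ring topology and compute_interior() routine are illustrative assumptions.

/* Minimal sketch (not from the paper): manual overlap of communication and
 * computation with plain nonblocking MPI primitives, the baseline style the
 * abstract says the hybrid MPI/SMPSs approach is meant to simplify. */
#include <mpi.h>
#include <stdlib.h>

#define N 1024   /* local array size per rank, chosen arbitrarily */

/* Work that neither reads the incoming halo value u[0] nor writes the
 * outgoing boundary value u[N-2], so it is safe while transfers are in flight. */
static void compute_interior(double *u, int n) {
    for (int i = 2; i < n - 2; i++)
        u[i] *= 0.5;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *u = calloc(N, sizeof(double));
    int left  = (rank - 1 + size) % size;   /* ring of ranks */
    int right = (rank + 1) % size;

    MPI_Request reqs[2];

    /* Start the halo exchange asynchronously: receive one value from the
     * left neighbour into u[0], send our last owned value u[N-2] to the right. */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&u[N - 2], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Overlap: do the independent interior work while the messages travel. */
    compute_interior(u, N);

    /* Only now block until both transfers have completed ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* ... and finish the points that needed the received halo value. */
    u[1] += 0.5 * u[0];

    free(u);
    MPI_Finalize();
    return 0;
}

As the abstract notes, the request bookkeeping and careful buffer handling in this style is what drives up code complexity; in the hybrid MPI/SMPSs version the programmer instead expresses the work as tasks and the SMPSs runtime, interoperating with MPI, provides the overlap. The details are in the publisher's version linked above.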

