Exploring the interoperability of remote GPGPU virtualization using rCUDA and directive-based programming models
Rights access: Restricted access - publisher's policy (embargoed until 2017-08-30)
Directive-based programming models, such as OpenMP, OpenACC, and OmpSs, enable users to accelerate applications by using coprocessors with little effort. These devices offer significant computing power, but their use can introduce two problems: an increase in the total cost of ownership, and underutilization, because not all codes match their architecture. Remote accelerator virtualization frameworks address these problems. In particular, rCUDA provides transparent access to any graphics processing unit installed in a cluster, reducing the number of accelerators and increasing their utilization ratio. Joining these two technologies, directive-based programming models and rCUDA, is thus highly appealing. In this work, we study the integration of OmpSs and OpenACC with rCUDA, describing and analyzing several applications over three different hardware configurations that include two InfiniBand interconnects and three NVIDIA accelerators. Our evaluation reveals favorable performance results, showing low overhead and similar scaling factors when using remote accelerators instead of local devices.
Citation: Castelló, Adrián [et al.]. Exploring the interoperability of remote GPGPU virtualization using rCUDA and directive-based programming models. "The Journal of Supercomputing", 21 June 2016.
File: Exploring the Interoperability of Remote GPGPU.pdf (689.4 KB) - Restricted access