Reducing cache coherence traffic with a NUMA-aware runtime approach

Cite as:
hdl:2117/116365
Document type: Article
Defense date: 2018-05
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Abstract
Cache Coherent NUMA (ccNUMA) architectures are a widespread paradigm due to the benefits they provide for scaling core count and memory capacity. In addition, the flat memory address space they offer considerably improves programmability. However, ccNUMA architectures require sophisticated and expensive cache coherence protocols to enforce correctness during parallel executions, which trigger a significant amount of on- and off-chip traffic in the system. This paper analyses how coherence traffic may be best constrained in a large, real ccNUMA platform comprising 288 cores through the use of a joint hardware/software approach. For several benchmarks, we study coherence traffic in detail under the influence of an added hierarchical cache layer in the directory protocol, combined with runtime-managed NUMA-aware scheduling and data allocation techniques that make the most efficient use of the added hardware. The effectiveness of this joint approach is demonstrated by speedups of 3.14× to 9.97× and coherence traffic reductions of up to 99% in comparison to NUMA-oblivious scheduling and data allocation.
Citation: Caheny, P., Alvarez, L., Derradji, S., Valero, M., Moreto, M., Casas, M. Reducing cache coherence traffic with a NUMA-aware runtime approach. "IEEE Transactions on Parallel and Distributed Systems", May 2018, vol. 29, no. 5, p. 1174-1187.
ISSN: 1045-9219
Publisher version: http://ieeexplore.ieee.org/document/8239832/
Files | Size
---|---
Reducing+Cache+Coherence+Traffic+with+a.pdf | 1,988Mb