Comparing last-level cache designs for CMP architectures
Cite as:
hdl:2117/11974
Document type: Text in conference proceedings
Publication date: 2010
Access conditions: Restricted access by publisher's policy
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to existing legal exemptions, its reproduction, distribution, public communication or transformation without the authorization of the rights holder is prohibited.
Abstract
The emergence of hardware accelerators, such as graphics processing units (GPUs), has challenged the interaction between processing elements (PEs) and main memory. In architectures like the Cell/B.E. or GPUs, the PEs incorporate local memories that are fed with data transferred from main memory through direct memory access (DMA). We expect chip multiprocessors (CMPs) with DMA-managed local memories to become more popular in the near future, given the increasing interest in accelerators, and we show that in that case the way cache hierarchies are conceived should be revised. For last-level caches in particular, the norm today is to use latency-aware organizations: in dynamic non-uniform cache architectures (D-NUCA), for instance, data is migrated closer to the requesting processor to optimize latency. In DMA-based scenarios, however, memory-system latency becomes irrelevant compared with the time consumed moving the DMA data, so latency-aware designs are, a priori, inefficient. In this work we revisit last-level cache designs in DMA-based CMP architectures with master-worker execution. Two scenarios are evaluated. First, we consider a set of private caches with data replication across them, where coherence of the copies is ensured through a hardware protocol; each PE then has a nearby copy of the datum, improving cache access latency. Second, we consider a partitioned cache, where the cache block a datum is allocated to is determined by its physical address; there are no copies of data, and access to a datum has a variable latency. In contrast with traditional load/store-based architectures, we find that the partitioned last-level cache scheme outperforms the cache with data replication in DMA-based scenarios.
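The partitioned-cache idea in the abstract can be illustrated with a minimal sketch (not taken from the paper; bank count, line size, and the modulo mapping are assumptions for illustration): the physical address alone fixes the home bank of a cache line, so every PE agrees on a single location for each datum and no coherence copies are needed among last-level cache banks.

```python
# Illustrative sketch of address-based LLC partitioning.
# NUM_BANKS, LINE_SIZE, and the modulo mapping are assumed values,
# not the configuration evaluated in the paper.

NUM_BANKS = 8    # assumed number of last-level cache banks
LINE_SIZE = 64   # assumed cache-line size in bytes

def home_bank(phys_addr: int) -> int:
    """Partitioned LLC: the bank holding a line is a pure function
    of its physical address, so the line has exactly one location
    and no replicas to keep coherent."""
    line = phys_addr // LINE_SIZE   # which cache line the address falls in
    return line % NUM_BANKS         # interleave consecutive lines across banks

# Every PE computes the same bank for a given address; access latency
# therefore varies with the PE's distance to that bank, which is the
# "variable latency" trade-off mentioned in the abstract.
print([home_bank(a) for a in (0x1000, 0x1040, 0x1080)])
```

Under this mapping, consecutive cache lines land in consecutive banks, spreading a large DMA transfer across all banks rather than concentrating it near one requester.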
Citation: Vega, A. [et al.]. Comparing last-level cache designs for CMP architectures. In: "2nd International Forum on Next Generation Multicore/Manycore Technologies". Saint-Malo: 2010, p. 1-11.
ISBN: 978-1-4503-0008-7
Files | Description | Size | Format | View |
---|---|---|---|---|
a2-vega.pdf | | 321,5 Kb | PDF | Restricted access |