Comparing last-level cache designs for CMP architectures
Cite as:
hdl:2117/11974
Document type: Conference report
Defense date: 2010
Rights access: Restricted access - publisher's policy
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
The emergence of hardware accelerators, such as graphics processing units (GPUs), has changed how processing elements (PEs) interact with main memory. In architectures like the Cell/B.E. or GPUs, the PEs incorporate local memories that are fed with data transferred from memory using direct memory accesses (DMAs). We expect chip multiprocessors (CMPs) with DMA-managed local memories to become more popular in the near future due to the increasing interest in accelerators, and we show that, in that case, the way cache hierarchies are conceived should be revised. For last-level caches in particular, the norm today is to use latency-aware organizations; for instance, in dynamic non-uniform cache architectures (D-NUCA), data is migrated closer to the requesting processor to optimize latency. In DMA-based scenarios, however, memory system latency becomes irrelevant compared with the time consumed moving the DMA data, so latency-aware designs are, a priori, inefficient. In this work we revisit last-level cache designs in DMA-based CMP architectures with master-worker execution. Two scenarios are evaluated. First, we consider a set of private caches with data replication across them, where coherence of the copies is ensured through a hardware protocol; in this scenario, a PE has a nearby copy of the datum, improving cache access latency. Second, we consider a partitioned cache, where the allocation of a datum to a cache block is determined by its physical address; in this scenario there are no copies of data, and access to a datum has a variable latency. In contrast with traditional load/store-based architectures, we find that the partitioned last-level cache scheme outperforms the cache with data replication in DMA-based scenarios.
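As a rough illustration of the partitioned organization described above, the sketch below shows how a block's home bank can be derived as a fixed function of its physical address, so no block is ever replicated and no coherence protocol is needed among last-level cache banks. The bank count, block size, and interleaving function here are illustrative assumptions, not parameters taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical parameters -- not taken from the paper. */
#define NUM_BANKS  16   /* last-level cache banks on the chip */
#define BLOCK_BITS  6   /* 64-byte cache blocks               */

/* Partitioned (address-interleaved) LLC: the home bank of a block is
 * a fixed function of its physical address, so each block has exactly
 * one possible location in the last-level cache.                      */
static unsigned home_bank(uint64_t paddr)
{
    return (unsigned)((paddr >> BLOCK_BITS) % NUM_BANKS);
}

int main(void)
{
    /* Two blocks that are adjacent in memory map to different banks,
     * spreading traffic across the chip at the cost of a variable
     * access latency that depends on the requester-to-bank distance.  */
    uint64_t a = 0x1000, b = 0x1040;
    printf("block 0x%llx -> bank %u\n", (unsigned long long)a, home_bank(a));
    printf("block 0x%llx -> bank %u\n", (unsigned long long)b, home_bank(b));
    return 0;
}
```

The replicated alternative evaluated in the paper would instead allow any bank to hold a copy of the block and rely on a hardware coherence protocol to keep the copies consistent, trading chip-wide capacity for lower access latency.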
Citation: Vega, A. [et al.]. Comparing last-level cache designs for CMP architectures. A: International Forum on Next Generation Multicore/Manycore Technologies. "2nd International Forum on Next Generation Multicore/Manycore Technologies". Saint-Malo: 2010, p. 1-11.
ISBN: 978-1-4503-0008-7
Files | Description | Size | Format | View
---|---|---|---|---
a2-vega.pdf | | 321,5Kb | PDF | Restricted access