Page size aware cache prefetching
Cite as:
hdl:2117/379247
Document type: Conference report
Defense date: 2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Project: BSC - COMPUTACION DE ALTAS PRESTACIONES VIII (AEI-PID2019-107255GB-C21)
UPC-COMPUTACION DE ALTAS PRESTACIONES VIII (AEI-PID2019-107255GB-C22)
DEEP-SEA - DEEP – SOFTWARE FOR EXASCALE ARCHITECTURES (EC-H2020-955606)
Abstract
The increase in working set sizes of contemporary applications outpaces the growth in cache sizes, resulting in frequent main memory accesses that deteriorate system performance due to the disparity between processor and memory speeds. Prefetching data blocks into the cache hierarchy ahead of demand accesses has proven successful at attenuating this bottleneck. However, spatial cache prefetchers operating in the physical address space leave significant performance on the table by limiting their pattern detection within 4KB physical page boundaries when modern systems use page sizes larger than 4KB to mitigate the address translation overheads. This paper exploits the high usage of large pages in modern systems to increase the effectiveness of spatial cache prefetching. We design and propose the Page-size Propagation Module (PPM), a µarchitectural scheme that propagates the page size information to the lower-level cache prefetchers, enabling safe prefetching beyond 4KB physical page boundaries when the accessed blocks reside in large pages, at the cost of augmenting the first-level caches' Miss Status Holding Register (MSHR) entries with one additional bit. PPM is compatible with any cache prefetcher without implying design modifications. We capitalize on PPM's benefits by designing a module that consists of two page size aware prefetchers that inherently use different page sizes to drive prefetching. The composite module uses adaptive logic to dynamically enable the most appropriate page size aware prefetcher. Finally, we show that the proposed designs are transparent to which cache prefetcher is used. We apply the proposed page size exploitation techniques to four state-of-the-art spatial cache prefetchers. Our evaluation shows that our proposals improve single-core geomean performance by up to 8.1% (2.1% at minimum) over the original implementation of the considered prefetchers, across 80 memory-intensive workloads.
In multi-core contexts, we report geomean speedups up to 7.7% across different cache prefetchers and core configurations.
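The core safety condition the abstract describes — a prefetch may cross a 4KB physical page boundary only when the extra PPM bit marks the triggering access as residing in a large page — can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the helper name `prefetch_is_safe` and the assumption of 2MB large pages are ours.

```c
#include <stdbool.h>
#include <stdint.h>

#define SMALL_PAGE_SIZE 4096ULL
#define LARGE_PAGE_SIZE (2ULL * 1024 * 1024) /* assumed 2MB large page */

/* Hypothetical check a page size aware prefetcher could apply: with PPM,
 * each demand miss carries one extra bit saying whether the access lies
 * in a large page. A prefetch candidate is safe iff it stays within the
 * same physical page as the triggering access, where the page extent is
 * 4KB normally and the large-page size when the PPM bit is set. */
static bool prefetch_is_safe(uint64_t trigger_paddr,
                             uint64_t candidate_paddr,
                             bool in_large_page)
{
    uint64_t region = in_large_page ? LARGE_PAGE_SIZE : SMALL_PAGE_SIZE;
    return (trigger_paddr / region) == (candidate_paddr / region);
}
```

Under this model, a candidate one 4KB page away from the trigger is dropped when the PPM bit is clear but issued when the bit indicates a large page, which is exactly the extra reach the paper attributes to page size aware prefetching.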
Citation: Vavouliotis, G. [et al.]. Page size aware cache prefetching. A: Annual IEEE/ACM International Symposium on Microarchitecture. "2022 55th Annual IEEE/ACM International Symposium on Microarchitecture: 1-5 October 2022, Chicago, Illinois: proceedings". Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 956-974. ISBN 978-1-6654-6272-3. DOI 10.1109/MICRO56248.2022.00070.
ISBN: 978-1-6654-6272-3
Publisher version: https://ieeexplore.ieee.org/document/9923823
| Files | Description | Size | Format | View |
|---|---|---|---|---|
| Page_Size_Aware_Cache_Prefetching_CameraReady.pdf | | 2,121Mb | PDF | View/Open |