Effective instruction prefetching via fetch prestaging
dc.contributor.author | Falcón Samper, Ayose Jesús |
dc.contributor.author | Ramírez Bellido, Alejandro |
dc.contributor.author | Valero Cortés, Mateo |
dc.contributor.other | Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors |
dc.date.accessioned | 2017-05-18T07:14:20Z |
dc.date.available | 2017-05-18T07:14:20Z |
dc.date.issued | 2005 |
dc.identifier.citation | Falcón, A., Ramírez, A., Valero, M. Effective instruction prefetching via fetch prestaging. A: IEEE International Parallel and Distributed Processing Symposium. "19th IEEE International Parallel and Distributed Processing Symposium: April 4-8, 2005, Denver, Colorado: proceedings". Denver, Colorado: Institute of Electrical and Electronics Engineers (IEEE), 2005, p. 1-10. |
dc.identifier.isbn | 0-7695-2312-9 |
dc.identifier.uri | http://hdl.handle.net/2117/104589 |
dc.description.abstract | As process technology shrinks and clock rates increase, instruction caches can no longer be accessed in one cycle. The alternatives are smaller caches (with a higher miss rate) or large caches with pipelined access (with a higher branch misprediction penalty). In both cases, the performance obtained is far from that of an ideal large cache with one-cycle access. In this paper we present cache line guided prestaging (CLGP), a novel mechanism that overcomes the limitations of current instruction cache implementations. CLGP employs prefetching to load future cache lines into a set of fast prestage buffers. These buffers are managed efficiently by the CLGP algorithm, which tries to serve as many fetches from them as possible. The number of fetches served by the main instruction cache is therefore greatly reduced, and so is the negative impact of its access latency on overall performance. With the best CLGP configuration using a 4 KB I-cache, speedups of 3.5% (at 0.09 µm) and 12.5% (at 0.045 µm) are obtained over an equivalent fetch directed prefetching configuration, and of 39% (at 0.09 µm) and 48% (at 0.045 µm) over a pipelined instruction cache without prefetching. Moreover, our results show that CLGP with a 2.5 KB total cache budget can match the performance of a 64 KB pipelined I-cache without prefetching, that is, equivalent performance at 6.4X our hardware budget. |
dc.format.extent | 10 p. |
dc.language.iso | eng |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) |
dc.subject | Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors |
dc.subject.lcsh | Microprocessors -- Design and construction |
dc.subject.other | Instruction sets |
dc.subject.other | Cache storage |
dc.subject.other | Pipeline processing |
dc.title | Effective instruction prefetching via fetch prestaging |
dc.type | Conference report |
dc.subject.lemac | Microprocessadors -- Disseny i construcció |
dc.contributor.group | Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions |
dc.identifier.doi | 10.1109/IPDPS.2005.188 |
dc.description.peerreviewed | Peer Reviewed |
dc.relation.publisherversion | http://ieeexplore.ieee.org/document/1419838/ |
dc.rights.access | Open Access |
local.identifier.drac | 2421099 |
dc.description.version | Postprint (published version) |
local.citation.author | Falcón, A.; Ramírez, A.; Valero, M. |
local.citation.contributor | IEEE International Parallel and Distributed Processing Symposium |
local.citation.pubplace | Denver, Colorado |
local.citation.publicationName | 19th IEEE International Parallel and Distributed Processing Symposium: April 4-8, 2005, Denver, Colorado: proceedings |
local.citation.startingPage | 1 |
local.citation.endingPage | 10 |
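The abstract's core idea (prefetching predicted cache lines into a small set of fast prestage buffers so that most fetches avoid the slow main I-cache) can be illustrated with a minimal toy model. This is only a sketch of the general technique as summarized above, not the authors' CLGP algorithm; all names, buffer counts, and latencies here are illustrative assumptions.

```python
from collections import OrderedDict

class PrestageBuffers:
    """Toy model of fetch prestaging: a few fast buffers hold
    prefetched cache lines. Parameters are illustrative, not
    taken from the paper."""

    def __init__(self, n_buffers=4):
        self.n = n_buffers
        self.buffers = OrderedDict()  # line address -> contents, FIFO eviction

    def prestage(self, line_addr):
        # Prefetch a predicted future line into a fast buffer.
        if line_addr in self.buffers:
            return
        if len(self.buffers) >= self.n:
            self.buffers.popitem(last=False)  # evict the oldest entry
        self.buffers[line_addr] = f"line@{hex(line_addr)}"

    def fetch(self, line_addr):
        # Serve from a prestage buffer (fast) when possible;
        # otherwise fall back to the main I-cache (assumed multi-cycle).
        if line_addr in self.buffers:
            return ("buffer", 1)
        return ("icache", 3)

psb = PrestageBuffers()
trace = [0x100, 0x140, 0x180, 0x1C0]   # sequential fetch addresses
for addr in trace[1:]:                 # prestage the predicted future lines
    psb.prestage(addr)
hits = sum(1 for a in trace if psb.fetch(a)[0] == "buffer")
print(hits)  # 3 of the 4 fetches are served by the fast buffers
```

The point of the model is the ratio it exposes: the more fetches the buffers absorb, the less the main I-cache's access latency matters, which is the effect the abstract quantifies.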