Instruction fetch architectures and code layout optimizations
Document type: Article
Publication date: 2001-11
Access conditions: Open access
The design of higher performance processors has been following two major trends: increasing the pipeline depth to allow faster clock rates, and widening the pipeline to allow parallel execution of more instructions. Designing a higher performance processor implies balancing all the pipeline stages to ensure that overall performance is not dominated by any of them. This means that a faster execution engine also requires a faster fetch engine, to ensure that it is possible to read and decode enough instructions to keep the pipeline full and the functional units busy. This paper explores the challenges faced by the instruction fetch stage for a variety of processor designs, from early pipelined processors to the more aggressive wide-issue superscalars. We describe the different fetch engines proposed in the literature, the performance issues involved, and some of the proposed improvements. We also show how compiler techniques that optimize the layout of the code in memory can be used to improve the fetch performance of the different engines described. Overall, we show how instruction fetch has evolved from fetching one instruction every few cycles, to fetching one instruction per cycle, to fetching a full basic block per cycle, to fetching several basic blocks per cycle. We trace the evolution of the mechanisms surrounding the instruction cache, and the different compiler optimizations used to better employ these mechanisms.
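The code layout optimizations the paper surveys rearrange basic blocks so that frequently executed paths become sequential in memory, improving instruction-cache and fetch-unit utilization. A minimal sketch of one such profile-guided technique, greedy chaining of basic blocks along the hottest branch edges (in the spirit of Pettis-Hansen bottom-up positioning; the block names and profile counts below are invented for illustration):

```python
# Hypothetical sketch of profile-guided basic-block chaining, one of the
# code layout optimizations surveyed in the paper. Block names and edge
# weights are invented for illustration.

def layout_chains(edges):
    """Greedily merge basic blocks into chains along the hottest edges.

    edges: list of (src, dst, weight) branch-profile counts.
    Returns a list of chains (lists of block names) laid out so that the
    most frequent fall-through paths become contiguous in memory.
    """
    chain_of = {}  # block name -> the chain (list) currently containing it
    for src, dst, _ in edges:
        for b in (src, dst):
            if b not in chain_of:
                chain_of[b] = [b]
    # Visit edges hottest-first; merge two chains when src ends one chain
    # and dst begins another, making that branch a fall-through.
    for src, dst, _ in sorted(edges, key=lambda e: -e[2]):
        cs, cd = chain_of[src], chain_of[dst]
        if cs is not cd and cs[-1] == src and cd[0] == dst:
            cs.extend(cd)
            for b in cd:
                chain_of[b] = cs
    # Deduplicate chains, preserving order of first appearance.
    seen, chains = set(), []
    for c in chain_of.values():
        if id(c) not in seen:
            seen.add(id(c))
            chains.append(c)
    return chains

# Example CFG: A branches to B (hot, 90%) or C (cold, 10%); both rejoin at D.
profile = [("A", "B", 90), ("A", "C", 10), ("B", "D", 90), ("C", "D", 10)]
print(layout_chains(profile))  # -> [['A', 'B', 'D'], ['C']]
```

The hot path A-B-D ends up contiguous, so a fetch engine reading sequential cache lines captures it without taken branches, while the cold block C is pushed out of line.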
Citation: Ramírez, A., Larriba, J., Valero, M. Instruction fetch architectures and code layout optimizations. "Proceedings of the IEEE", November 2001, vol. 89, no. 11, pp. 1588-1609.
Publisher's version: http://ieeexplore.ieee.org/document/964440/