
dc.contributor.author: García Vidal, Jorge
dc.contributor.author: Corbal San Adrián, Jesús
dc.contributor.author: Cerdà Alabern, Llorenç
dc.contributor.author: Valero Cortés, Mateo
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors
dc.identifier.citation: García, J., Corbal, J., Cerdà, L., Valero, M. Design and implementation of high-performance memory systems for future packet buffers. A: Annual IEEE/ACM International Symposium on Microarchitecture. "36th Annual IEEE/ACM International Symposium on Microarchitecture, 2003, MICRO-36: proceedings". San Diego, California: Institute of Electrical and Electronics Engineers (IEEE), 2003, p. 372-384.
dc.description.abstract: In this paper, we address the design of a future high-speed router that supports line rates as high as OC-3072 (160 Gb/s), around one hundred ports and several service classes. Building such a high-speed router would raise many technological problems, one of them being the packet buffer design, mainly because in router design it is important to provide worst-case bandwidth guarantees and not just average-case optimizations. A previous packet buffer design provides worst-case bandwidth guarantees by using a hybrid SRAM/DRAM approach. Next-generation routers need to support hundreds of interfaces (i.e., ports and service classes). Unfortunately, high bandwidth for hundreds of interfaces requires the previous design to use large SRAMs, which become a bandwidth bottleneck. The key observation we make is that the SRAM size is proportional to the DRAM access time, but we can reduce the effective DRAM access time by overlapping multiple accesses to different banks, allowing us to reduce the SRAM size. The key challenge is that, to keep the worst-case bandwidth guarantees, we need to guarantee that there are no bank conflicts while the accesses are in flight. We guarantee the absence of bank conflicts by reordering the DRAM requests using a modern issue-queue-like mechanism. Because our design may lead to fragmentation of memory across packet buffer queues, we propose to share the DRAM space among multiple queues by renaming the queue slots. To the best of our knowledge, the design proposed in this paper is the fastest buffer design using commodity DRAM to be published to date.
dc.format.extent: 13 p.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.subject: Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors
dc.subject.lcsh: Routing protocols (Computer network protocols)
dc.subject.other: Buffer storage
dc.subject.other: Memory architecture
dc.subject.other: Packet switching
dc.subject.other: DRAM chips
dc.subject.other: SRAM chips
dc.title: Design and implementation of high-performance memory systems for future packet buffers
dc.type: Conference report
dc.subject.lemac: Encaminadors (Xarxes d'ordinadors)
dc.contributor.group: Universitat Politècnica de Catalunya. CNDS - Xarxes de Computadors i Sistemes Distribuïts
dc.contributor.group: Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions
dc.description.peerreviewed: Peer Reviewed
dc.rights.access: Open Access
dc.description.version: Postprint (published version)
upcommons.citation.author: García, J., Corbal, J., Cerdà, L., Valero, M.
upcommons.citation.contributor: Annual IEEE/ACM International Symposium on Microarchitecture
upcommons.citation.pubplace: San Diego, California
upcommons.citation.publicationName: 36th Annual IEEE/ACM International Symposium on Microarchitecture, 2003, MICRO-36: proceedings
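The abstract describes an issue-queue-like mechanism that reorders DRAM requests so that accesses to different banks overlap without bank conflicts. The following is a minimal toy sketch of that idea, not the paper's actual design: all names, the bank count, and the latency parameter are illustrative assumptions.

```python
from collections import deque

class BankScheduler:
    """Toy sketch of issue-queue-like DRAM request reordering: pending
    requests wait in a window, and each cycle the oldest request whose
    target bank is idle is issued, so accesses to different banks overlap
    and no bank conflict occurs while requests are in flight.
    (Illustrative only; parameters are not from the paper.)"""

    def __init__(self, num_banks=4, bank_latency=3):
        self.busy_until = [0] * num_banks   # cycle at which each bank frees up
        self.window = deque()               # pending requests: (tag, bank)
        self.bank_latency = bank_latency

    def add(self, tag, bank):
        self.window.append((tag, bank))

    def step(self, cycle):
        """Issue the oldest request whose bank is idle; return it or None."""
        for i, (tag, bank) in enumerate(self.window):
            if self.busy_until[bank] <= cycle:
                self.busy_until[bank] = cycle + self.bank_latency
                del self.window[i]
                return (tag, bank)
        return None                         # every pending bank is busy

sched = BankScheduler(num_banks=2, bank_latency=3)
for tag, bank in [("r0", 0), ("r1", 0), ("r2", 1)]:
    sched.add(tag, bank)

issued = [sched.step(c) for c in range(6)]
# r0 goes to bank 0; r1 also targets bank 0 and must wait, so r2
# (bank 1) is issued out of order ahead of it.
```

In-order issue would stall behind r1's bank conflict; reordering hides part of the DRAM access time, which is the effect the paper exploits to shrink the SRAM.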


All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work is prohibited without permission of the copyright holder.