Show simple item record

dc.contributor.author: Vega, Carlos
dc.contributor.author: Zazo, Jose F.
dc.contributor.author: Meyer, Hugo
dc.contributor.author: Zyulkyarov, Ferad
dc.contributor.author: Lopez-Buedo, S.
dc.contributor.author: Aracil, Javier
dc.contributor.other: Barcelona Supercomputing Center
dc.date.accessioned: 2018-03-27T14:57:24Z
dc.date.available: 2018-03-27T14:57:24Z
dc.date.issued: 2018-02-15
dc.identifier.citation: Vega, C. [et al.]. Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis. A: "High Performance Computing and Communications; IEEE 15th International Conference on Smart City; IEEE 3rd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 2017 IEEE 19th International Conference on". IEEE, 2018, p. 340-347.
dc.identifier.isbn: 978-1-5386-2588-0
dc.identifier.uri: http://hdl.handle.net/2117/115863
dc.description.abstract: Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. Particularly, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that the memory usage is one order of magnitude higher for the stress case with respect to average workloads. Therefore, dimensioning memory for the worst case in conventional systems will result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Therefore, using a disaggregated architecture will allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory.
dc.description.sponsorship: This work has been partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 687632 (dReDBox Project).
dc.format.extent: 8 p.
dc.language.iso: eng
dc.publisher: IEEE
dc.subject: Àrees temàtiques de la UPC::Informàtica
dc.subject.lcsh: High performance computing
dc.subject.other: Data centers
dc.subject.other: Optical switches
dc.subject.other: Servers
dc.subject.other: Data analysis
dc.subject.other: Hardware
dc.subject.other: Memory management
dc.title: Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
dc.type: Conference lecture
dc.subject.lemac: Supercomputadors
dc.identifier.doi: 10.1109/HPCC-SmartCity-DSS.2017.45
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: http://ieeexplore.ieee.org/document/8291948/
dc.rights.access: Open Access
dc.description.version: Postprint (author's final draft)
dc.relation.projectid: info:eu-repo/grantAgreement/EC/H2020/687632/EU/Disaggregated Recursive Datacentre-in-a-Box/dReDBox
local.citation.publicationName: High Performance Computing and Communications; IEEE 15th International Conference on Smart City; IEEE 3rd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 2017 IEEE 19th International Conference on
local.citation.startingPage: 340
local.citation.endingPage: 347
