dc.contributor | Becerra Fontal, Yolanda |
dc.contributor | Carrera Pérez, David |
dc.contributor.author | Polo Bardés, Jordà |
dc.contributor.other | Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors |
dc.date.accessioned | 2014-07-31T11:24:29Z |
dc.date.available | 2014-07-31T11:24:29Z |
dc.date.issued | 2014-07-15 |
dc.identifier.citation | Polo Bardés, J. Multi-constraint scheduling of MapReduce workloads. Doctoral thesis, UPC, Departament d'Arquitectura de Computadors, 2014. DOI 10.5821/dissertation-2117-95387. |
dc.identifier.uri | http://hdl.handle.net/2117/95387 |
dc.description.abstract | In recent years there has been an extraordinary growth of large-scale data processing and related technologies in both industry and academia. This trend is mostly driven by the need to explore the increasingly large amounts of information that global companies and communities are able to gather, and has led to the introduction of new tools and models, most of which are designed around the idea of handling huge amounts of data.
A good example of this trend towards improved large-scale data processing is MapReduce, a programming model intended to ease the development of massively parallel applications, which has been widely adopted to process large datasets thanks to its simplicity (a minimal sketch of the model follows this abstract). While the MapReduce model was originally used primarily for batch data processing in large static clusters, nowadays it is mostly deployed along with other kinds of workloads in shared environments, in which multiple users may be submitting concurrent jobs with completely different priorities and needs: from small, almost interactive executions to very long applications that take hours to complete. Scheduling and selecting tasks for execution is extremely relevant in MapReduce environments since it governs a job's opportunity to make progress and determines its performance. However, only basic primitives to prioritize jobs are available at the moment, which constantly leads to either under- or over-provisioning, since the amount of resources needed to complete a particular job is not obvious a priori.
This thesis aims to address both the lack of management capabilities and the increased complexity of the environments in which MapReduce is executed. To that end, new models and techniques are introduced in order to improve the scheduling of MapReduce in the presence of different constraints found in real-world scenarios, such as completion time goals, data locality, hardware heterogeneity, or availability of resources. The focus is on improving the integration of MapReduce with the computing infrastructures in which it usually runs, allowing alternative techniques for dynamic management and provisioning of resources. More specifically, it focuses on three scenarios of incremental scope. First, it studies the prospects of using high-level performance criteria to manage and drive the performance of MapReduce applications, taking advantage of the fact that MapReduce is executed in controlled environments in which the status of the cluster is known. Second, it examines the feasibility and benefits of making the MapReduce runtime more aware of the underlying hardware and the characteristics of applications. Finally, it considers the interaction between MapReduce and other kinds of workloads, proposing new techniques to handle these increasingly complex environments.
Following the three scenarios described above, this thesis contributes to the management of MapReduce workloads by 1) proposing a performance model for MapReduce workloads and a scheduling algorithm that leverages the proposed model and is able to adapt to the varying needs of its users in the presence of completion time constraints (the slot-estimation sketch below illustrates the underlying idea); 2) proposing a new resource model for MapReduce and a placement algorithm aware of the underlying hardware as well as the characteristics of the applications, capable of improving cluster utilization while still being guided by job performance metrics; and 3) proposing a model for shared environments in which MapReduce is executed along with other kinds of workloads, such as transactional applications, and a scheduler aware of these workloads and their expected resource demand, capable of improving resource utilization across machines while observing completion time goals. |
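The abstract describes MapReduce as a programming model built around user-supplied map and reduce functions. Below is a minimal, self-contained sketch of that map/shuffle/reduce flow in plain Python; the function names and the word-count example are illustrative assumptions, not code from the thesis or from any particular MapReduce implementation.

```python
from collections import defaultdict

# Map phase: emit intermediate (key, value) pairs for each input record.
def map_fn(document):
    for word in document.split():
        yield (word, 1)

# Reduce phase: combine all values observed for a single key.
def reduce_fn(key, values):
    return (key, sum(values))

def mapreduce(records, map_fn, reduce_fn):
    # Shuffle phase: group intermediate values by key, as the framework
    # would do between the map and reduce stages.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return [reduce_fn(key, values) for key, values in groups.items()]

if __name__ == "__main__":
    docs = ["map reduce map", "reduce shuffle reduce"]
    print(mapreduce(docs, map_fn, reduce_fn))
    # [('map', 2), ('reduce', 3), ('shuffle', 1)]
```

In a real deployment each phase runs as many parallel tasks across a cluster, which is why scheduling and task selection, the subject of this thesis, determine a job's performance.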
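Contribution 1 pairs a performance model with a scheduler driven by completion time goals. As a rough illustration of the underlying idea only, and not the thesis's actual model, the sketch below estimates how many concurrent task slots a job needs from its observed average task cost and the time remaining until its goal; all names and the formula are simplifying assumptions.

```python
import math

def slots_needed(pending_tasks, avg_task_seconds, seconds_to_deadline):
    """Estimate the concurrent slots required to finish `pending_tasks`
    before the completion time goal, assuming similarly sized tasks."""
    if seconds_to_deadline <= 0:
        # Goal already missed: ask for maximum parallelism.
        return pending_tasks
    remaining_work = pending_tasks * avg_task_seconds
    return max(1, math.ceil(remaining_work / seconds_to_deadline))

# Example: 120 pending tasks averaging 30 s each, 10 minutes to the
# goal -> the job needs roughly 6 slots allocated at once.
print(slots_needed(120, 30.0, 600.0))  # 6
```

A scheduler can recompute this estimate as tasks complete and reassign slots among concurrent jobs, which is the kind of dynamic, goal-driven allocation the abstract refers to.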
dc.format.extent | 159 p. |
dc.language.iso | eng |
dc.publisher | Universitat Politècnica de Catalunya |
dc.rights | Access to the contents of this thesis is subject to acceptance of the terms of use established by the following Creative Commons license: http://creativecommons.org/licenses/by-nc-sa/3.0/es/ |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/es/ |
dc.source | TDX (Tesis Doctorals en Xarxa) |
dc.subject | Àrees temàtiques de la UPC::Informàtica |
dc.title | Multi-constraint scheduling of MapReduce workloads |
dc.type | Doctoral thesis |
dc.subject.lemac | Algorismes |
dc.subject.lemac | Ordinadors paral·lels |
dc.identifier.doi | 10.5821/dissertation-2117-95387 |
dc.identifier.dl | B 19968-2014 |
dc.rights.access | Open Access |
dc.description.version | Postprint (published version) |
dc.identifier.tdx | http://hdl.handle.net/10803/276174 |