MapReduce: Limitations, optimizations and open issues
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. ORCID iD: 0000-0001-8219-4862
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
2013 (English). In: Proceedings - 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2013, IEEE, 2013, 1031-1038 p. Conference paper (Refereed).
Abstract [en]

MapReduce has recently gained great popularity as a programming model for processing and analyzing massive data sets and is extensively used by academia and industry. Several implementations of the MapReduce model have emerged, the Apache Hadoop framework being the most widely adopted. Hadoop offers various utilities, such as a distributed file system, job scheduling and resource management capabilities and a Java API for writing applications. Hadoop's success has attracted research interest and has led to various modifications and extensions to the framework. Implemented optimizations include performance improvements, programming model extensions, tuning automation and usability enhancements. In this paper, we discuss the current state of the Hadoop framework and its identified limitations. We present, compare and classify Hadoop/MapReduce variations, and identify trends, open issues and possible future directions.
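As a brief illustration of the programming model discussed in the abstract (a generic sketch, not code from the paper; the function names are assumptions), a MapReduce word count expresses the computation as a map function emitting key-value pairs, a shuffle grouping the pairs by key, and a reduce function aggregating each group:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: aggregate the values that share one key.
    return (key, sum(values))

def word_count(documents):
    pairs = [p for doc in documents for p in map_phase(doc)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
```

Hadoop executes these same phases across a cluster: map and reduce run as distributed tasks, while the framework itself performs the shuffle between them.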

Place, publisher, year, edition, pages
IEEE, 2013. 1031-1038 p.
Series: IEEE International Conference on Trust, Security and Privacy in Computing and Communications, ISSN 2324-898X
Keyword [en]
Big Data, MapReduce, Survey
National Category
Information Systems
URN: urn:nbn:se:kth:diva-143846
DOI: 10.1109/TrustCom.2013.126
ISI: 000332856700131
ScopusID: 2-s2.0-84893439928
ISBN: 978-076955022-0
OAI: diva2:712390
12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2013; Melbourne, VIC; Australia; 16 July 2013 through 18 July 2013

QC 20140415

Available from: 2014-04-15. Created: 2014-03-31. Last updated: 2014-06-05. Bibliographically approved.
In thesis
1. Performance Optimization Techniques and Tools for Data-Intensive Computation Platforms: An Overview of Performance Limitations in Big Data Systems and Proposed Optimizations
2014 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Big data processing has recently gained a lot of attention from both academia and industry. The term refers to tools, methods, techniques and frameworks built to collect, store, process and analyze massive amounts of data. Big data can be structured, unstructured or semi-structured. Data is generated from a variety of sources and can arrive in the system at various rates. In order to process these large amounts of heterogeneous data in an inexpensive and efficient way, massive parallelism is often used. The common architecture of a big data processing system consists of a shared-nothing cluster of commodity machines. However, even in such a highly parallel setting, processing is often very time-consuming. Applications may take hours or even days to produce useful results, making interactive analysis and debugging cumbersome.

One of the main problems is that good performance requires both good data locality and good resource utilization. A characteristic of big data analytics is that the amount of data that is processed is typically large in comparison with the amount of computation done on it. In this case, processing can benefit from data locality, which can be achieved by moving the computation close to the data, rather than vice versa. Good utilization of resources means that the data processing is done with maximal parallelization. Both locality and resource utilization are aspects of the programming framework's runtime system. Requiring the programmer to work explicitly with parallel process creation and process placement is not desirable. Thus, optimizations that relieve the programmer from low-level, error-prone instrumentation while still achieving good performance are essential.

The main goal of this thesis is to study, design and implement performance optimizations for big data frameworks. This work contributes methods and techniques to build tools for easy and efficient processing of very large data sets. It describes ways to make systems faster by shortening job completion times. Another major goal is to facilitate application development on distributed data-intensive computation platforms and make big data analytics accessible to non-experts, so that users with limited programming experience can benefit from analyzing enormous datasets.

The thesis provides results from a study of existing optimizations in MapReduce and Hadoop-related systems. The study presents a comparison and classification of existing systems based on their main contribution. It then summarizes the current state of the research field and identifies trends and open issues, while also providing our vision of future directions.

Next, this thesis presents a set of performance optimization techniques and corresponding tools for data-intensive computing platforms:

PonIC, a project that ports the high-level dataflow framework Pig on top of the data-parallel computing framework Stratosphere. The results of this work show that Pig can benefit substantially from using Stratosphere as the backend system and gain performance, without any loss of expressiveness. The work also identifies the features of Pig that negatively impact execution time and presents a way of integrating Pig with different backends.

HOP-S, a system that uses in-memory random sampling to return approximate, yet accurate query answers. It uses a simple yet efficient implementation of a random sampling technique, which significantly improves the accuracy of online aggregation.
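The general idea of trading a bounded amount of accuracy for speed by aggregating over an in-memory uniform sample can be sketched as follows (a generic illustration with assumed names, not the actual HOP-S implementation):

```python
import random

def approximate_sum(data, sample_fraction=0.1, seed=42):
    """Estimate sum(data) from a uniform random sample.

    Draws a sample of the requested fraction of the data and scales
    the sample sum by the inverse sampling fraction; the estimate
    approaches the exact answer as the fraction grows, which is the
    behavior online aggregation exposes to the user incrementally.
    """
    rng = random.Random(seed)  # fixed seed only for reproducibility
    k = max(1, int(len(data) * sample_fraction))
    sample = rng.sample(data, k)
    return sum(sample) * (len(data) / k)
```

An online-aggregation system would additionally report a confidence interval that narrows as more of the data is sampled; this sketch shows only the scaled point estimate.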

An optimization that exploits computation redundancy in analysis programs and m2r2, a system that stores intermediate results and uses plan matching and rewriting in order to reuse results in future queries. Our prototype on top of the Pig framework demonstrates significantly reduced query response times.
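Generically, result reuse of this kind amounts to keying a store of materialized results on a canonical representation of the query plan. The sketch below (class and method names are assumptions, not the m2r2 API) illustrates plan matching on exact plan equality; m2r2 additionally rewrites plans so that partially overlapping queries can also reuse stored results:

```python
class ResultCache:
    """Toy illustration of intermediate-result reuse.

    Keys the cache on a canonical form of the query plan; a query
    whose plan matches a stored one is answered from the store
    instead of being re-executed.
    """

    def __init__(self):
        self._store = {}
        self.hits = 0

    def run(self, plan, execute):
        key = tuple(plan)           # canonical plan representation
        if key in self._store:      # plan matching: reuse prior result
            self.hits += 1
            return self._store[key]
        result = execute(plan)      # cache miss: execute and materialize
        self._store[key] = result
        return result
```

The benefit reported for m2r2, reduced response time for repeated queries, corresponds here to the second `run` call returning without invoking `execute` at all.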

Finally, an optimization framework for iterative fixed-point computations, which exploits asymmetry in large-scale graph analysis. The framework uses a mathematical model to explain several optimizations and to formally specify the conditions under which optimized iterative algorithms are equivalent to the general solution.
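As a sketch of a fixed-point graph computation and the asymmetry such a framework can exploit (a generic example, not the thesis framework's API), connected components by label propagation reaches its fixed point when no label changes, and each superstep only needs to process vertices whose label changed in the previous one:

```python
def connected_components(edges, vertices):
    """Label propagation to a fixed point.

    Each vertex starts with its own id as its label; only vertices
    whose label changed in the previous superstep (the active set)
    propagate to their neighbors, so the work per iteration shrinks
    as labels stabilize.
    """
    # Build an undirected adjacency list.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    label = {v: v for v in vertices}
    active = set(vertices)
    while active:                   # fixed point: no labels changed
        changed = set()
        for u in active:
            for v in adj[u]:
                if label[u] < label[v]:
                    label[v] = label[u]  # adopt the smaller label
                    changed.add(v)
        active = changed            # asymmetry: only changed vertices
    return label
```

In typical graphs the active set shrinks rapidly after the first few supersteps, which is exactly the asymmetry that incremental iterative optimizations exploit.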

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2014. 37 p.
Series
TRITA-ICT-ECS AVH, ISSN 1653-6363 ; 14:11
Keyword [en]
performance optimization, data-intensive computing, big data
National Category
Engineering and Technology
Research subject
Information and Communication Technology
urn:nbn:se:kth:diva-145329 (URN)
978-91-7595-143-0 (ISBN)
2014-06-11, Sal D, KTH - ICT, Isafjordsgatan 39, Kista, 10:00 (English)

QC 20140605

Available from: 2014-06-05. Created: 2014-05-16. Last updated: 2014-06-05. Bibliographically approved.

Open Access in DiVA

No full text


By author/editor
Kalavri, Vasiliki; Vlassov, Vladimir
