Decoupled Strategy for Imbalanced Workloads in MapReduce Frameworks
Rivas Gomez, Sergio. KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
Markidis, Stefano. KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST). ORCID iD: 0000-0003-0639-0639
Laure, Erwin. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for High Performance Computing, PDC. ORCID iD: 0000-0002-9901-9857
2019 (English). In: Proceedings - 20th International Conference on High Performance Computing and Communications, 16th International Conference on Smart City and 4th International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2018. Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 921-927. Conference paper, Published paper (Refereed).
Abstract [en]

In this work, we consider the integration of MPI one-sided communication and non-blocking I/O in HPC-centric MapReduce frameworks. Using a decoupled strategy, we aim to overlap the Map and Reduce phases of the algorithm by allowing processes to communicate and synchronize using solely one-sided operations. Hence, we effectively improve performance in situations where the workload per process becomes unexpectedly imbalanced. Using a Word-Count implementation and a large dataset from the Purdue MapReduce Benchmarks Suite (PUMA), we demonstrate that our approach can provide up to 23% performance improvement on average compared to a reference MapReduce implementation that uses state-of-the-art MPI collective communication and I/O.
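
A rough illustration of the mechanism the abstract describes: with passive-target MPI one-sided operations, mappers deposit partial results directly into a window exposed by a reducer, with no matching receive or collective call on the target side, so a straggling process never stalls the rest. The C sketch below shows only this communication pattern, not the paper's implementation; the one-counter-per-bucket layout, the toy hash function, and the in-line word list are assumptions made for brevity.

    /* Sketch: mappers add word counts into a reducer's MPI window using
     * only one-sided operations, so the Map and Reduce phases can overlap. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define NBUCKETS 4096

    static int bucket_of(const char *word)   /* toy hash; an assumption */
    {
        unsigned h = 5381u;
        for (; *word; ++word)
            h = h * 33u + (unsigned char)*word;
        return (int)(h % NBUCKETS);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        long *counts = NULL;
        MPI_Win win;
        /* Every rank exposes a bucket array; rank 0 plays the reducer. */
        MPI_Win_allocate((MPI_Aint)(NBUCKETS * sizeof(long)), sizeof(long),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &counts, &win);
        memset(counts, 0, NBUCKETS * sizeof(long));
        MPI_Barrier(MPI_COMM_WORLD);        /* all windows are zeroed */

        /* Passive-target epoch: the reducer issues no call per update. */
        const char *words[] = { "map", "reduce", "map" };  /* stand-in input */
        long one = 1;
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        for (int i = 0; i < 3; ++i)
            MPI_Accumulate(&one, 1, MPI_LONG, 0, bucket_of(words[i]),
                           1, MPI_LONG, MPI_SUM, win);
        MPI_Win_unlock(0, win);             /* updates complete at rank 0 */

        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0)                      /* local read after the barrier */
            printf("\"map\" bucket count: %ld\n", counts[bucket_of("map")]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Because MPI_Accumulate with MPI_SUM is atomic per element, many mappers can update the same window concurrently under a shared lock, which is what lets the two phases overlap safely.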

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019. p. 921-927
Keywords [en]
High Performance Computing, MapReduce, MPI One-Sided Communication
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:kth:diva-246358
DOI: 10.1109/HPCC/SmartCity/DSS.2018.00153
ISI: 000468511200121
Scopus ID: 2-s2.0-85062487109
ISBN: 9781538666142 (print)
OAI: oai:DiVA.org:kth-246358
DiVA, id: diva2:1297111
Conference
20th International Conference on High Performance Computing and Communications, 16th IEEE International Conference on Smart City and 4th IEEE International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2018, 28 June 2018 through 30 June 2018
Note

QC 20190319

Available from: 2019-03-19. Created: 2019-03-19. Last updated: 2019-11-01. Bibliographically approved.
In thesis
1. High-Performance I/O Programming Models for Exascale Computing
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The success of the exascale supercomputer largely depends on novel breakthroughs that meet the increasing demand for high-performance I/O on HPC. Scientists are aggressively taking advantage of the available compute power of petascale supercomputers to run larger-scale and higher-fidelity simulations. At the same time, data-intensive workloads have recently become dominant as well. Such use cases inherently place additional stress on the I/O subsystem, mostly due to the elevated number of I/O transactions.

As a consequence, three critical challenges arise that are of paramount importance at exascale. First, while the concurrency of next-generation supercomputers is expected to increase by up to 1000x, the bandwidth and access latency of the I/O subsystem are projected to remain roughly constant in comparison. Storage is, therefore, on the verge of becoming a serious bottleneck. Second, even though upcoming supercomputers are expected to integrate emerging non-volatile memory technologies to compensate for some of these limitations, existing programming models and interfaces (e.g., MPI-IO) might not provide any clear technical advantage when targeting distributed intra-node storage, let alone byte-addressable persistent memories. And third, while the shift to heterogeneous compute nodes can provide benefits in terms of performance and thermal dissipation, this technological transformation implicitly increases programming complexity, making it difficult for scientific applications to take advantage of these developments.

In this thesis, we explore how programming models and interfaces must evolve to address the aforementioned challenges. We present MPI storage windows, a novel concept that utilizes the MPI one-sided communication model and MPI windows as a unified interface for programming memory and storage. We then demonstrate how MPI one-sided communication can benefit data analytics frameworks following a decoupled strategy, while integrating seamless fault tolerance and out-of-core execution. Furthermore, we introduce persistent coarrays to enable transparent resiliency in Coarray Fortran, supporting the "failed images" feature recently introduced into the standard. Finally, we propose a global memory abstraction layer, inspired by the memory-mapped I/O mechanism of the OS, to expose different storage technologies through conventional memory operations.
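
A minimal sketch of the MPI storage windows idea: the standard window interface is kept, and the allocation is steered toward storage through MPI info hints. The hint keys "alloc_type" and "storage_alloc_filename" below follow the related MPI storage windows publications but are implementation-defined, not part of the MPI standard, so they should be read as assumptions; an implementation without this support simply ignores the unknown keys and allocates a plain memory window.

    /* Sketch: a window allocated on storage is used exactly like any
     * other MPI window; only the info hints differ. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "alloc_type", "storage");                  /* assumed key */
        MPI_Info_set(info, "storage_alloc_filename", "/tmp/win.dat"); /* assumed key */

        double *data = NULL;
        MPI_Win win;
        /* Same call as for a memory window; the hints redirect it. */
        MPI_Win_allocate((MPI_Aint)(1024 * sizeof(double)), sizeof(double),
                         info, MPI_COMM_WORLD, &data, &win);

        /* Puts, gets, and accumulates now reach the file-backed region,
         * which is what enables out-of-core execution and a simple
         * persistence path for fault tolerance. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        double x = 42.0;
        MPI_Put(&x, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(0, win);

        MPI_Win_free(&win);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

Because the same MPI_Put/MPI_Get calls work unchanged, applications gain persistence and out-of-core execution without adopting a separate I/O interface, which is the unification the thesis argues for.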

The outcomes of these contributions are expected to have a considerable impact on a wide variety of scientific applications on HPC, both on current and next-generation supercomputers.

Abstract [sv]

The success of exascale supercomputers will largely depend on new breakthroughs that meet the growing demands for high-performance I/O in high-performance computing. Today, researchers exploit the available compute power of petascale supercomputers to run larger simulations with higher fidelity. At the same time, data-intensive applications have become common. These place additional strain on the I/O subsystem, above all through the larger number of I/O transactions.

Consequently, several critical challenges arise that are of the utmost importance for computing at exascale. While the concurrency of next-generation supercomputers is expected to increase by up to three orders of magnitude, the bandwidth and access latency of the I/O subsystem are projected to remain relatively unchanged. Storage is therefore on the verge of becoming a serious bottleneck. Upcoming supercomputers are expected to include new non-volatile memory technologies to compensate for these limitations, but existing programming models and interfaces (e.g., MPI-IO) may not provide any clear technical advantages when applied to distributed intra-node storage, especially not for byte-addressable persistent memories. Even though increasing heterogeneity of compute nodes will be able to offer benefits with respect to performance and thermal dissipation, this technological transformation will bring an increase in programming complexity, which will make it harder for scientific applications to benefit from these developments.

This thesis explores how programming models and interfaces need to evolve to circumvent the aforementioned limitations. MPI storage windows are presented, a new concept that uses the MPI one-sided communication model together with MPI windows as a unified interface to program memory and storage. It is then demonstrated how one-sided MPI communication can benefit data analytics frameworks through a decoupled strategy, while integrating continuous fault tolerance and out-of-core execution. Furthermore, persistent coarrays are introduced to enable transparent resilience in Coarray Fortran, supporting the "failed images" feature recently introduced into the standard. Finally, a global memory abstraction layer is proposed that, inspired by the memory-mapped I/O mechanism of the operating system, exposes different storage technologies using conventional memory operations.

The results of these contributions are expected to have a significant impact on high-performance computing in several scientific application areas, both for existing and next-generation supercomputers.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2019. p. 135
Series
TRITA-EECS-AVL ; 2019:77
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:kth:diva-263196
ISBN: 978-91-7873-344-6
Public defence
2019-11-29, B1, Brinellvägen 23, Bergs, floor 1, KTH Campus, Stockholm, 10:00 (English)
Note

QC 20191105

Available from: 2019-11-05. Created: 2019-11-01. Last updated: 2019-11-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records BETA

Markidis, Stefano; Laure, Erwin
