Efficient Algorithms for Collective Operations with Notified Communication in Shared Windows
KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Parallelldatorcentrum (PDC).
T Syst Solut Res GmbH, D-70563 Stuttgart, Germany.
KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Parallelldatorcentrum (PDC). ORCID iD: 0000-0003-2414-700X
KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Parallelldatorcentrum (PDC). ORCID iD: 0000-0002-9901-9857
2018 (English). In: PROCEEDINGS OF PAW-ATM18: 2018 IEEE/ACM PARALLEL APPLICATIONS WORKSHOP, ALTERNATIVES TO MPI (PAW-ATM), IEEE, 2018, pp. 1-10. Conference paper, Published paper (Refereed)
Abstract [en]

Collective operations are commonly used in various parts of scientific applications. Especially in strong-scaling scenarios, collective operations can negatively impact overall application performance: while the load per rank decreases with increasing core counts, the time spent in, e.g., barrier operations increases logarithmically with the core count. In this article, we develop novel algorithmic solutions for collective operations such as Allreduce and Allgather(V) by leveraging notified communication in shared windows. To this end, we have developed an extension of GASPI that enables all ranks participating in a shared window to observe the entire notified communication targeted at the window. By exploiting the benefits of this extension, we deliver high-performing implementations of Allreduce and Allgather(V) on Intel and Cray clusters. These implementations achieve 2x-4x performance improvements compared to the best-performing MPI implementations for various data distributions.
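
The record does not spell out the algorithms, but the notified-communication primitive they build on can be illustrated with the standard GPI-2/GASPI API. The sketch below is only a minimal ring exchange using gaspi_write_notify and gaspi_notify_waitsome; it does not reproduce the paper's shared-window/shared-notification extension or its Allreduce/Allgather(V) algorithms, and the segment id, slot layout, and queue choice are illustrative assumptions.

/*
 * Minimal sketch of GASPI notified communication (standard GPI-2 API only):
 * each rank writes one double into its right neighbour's segment with
 * gaspi_write_notify and waits for the matching notification from its left
 * neighbour.  This is the baseline mechanism the paper extends to shared
 * windows; it is NOT the paper's extended API or its collective algorithms.
 */
#include <GASPI.h>
#include <stdio.h>

int main(void)
{
    gaspi_proc_init(GASPI_BLOCK);

    gaspi_rank_t rank, nprocs;
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nprocs);

    /* One slot per rank; slot r will hold the contribution of rank r. */
    const gaspi_segment_id_t seg = 0;           /* illustrative segment id */
    const gaspi_size_t slot = sizeof(double);
    gaspi_segment_create(seg, slot * nprocs, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    gaspi_pointer_t ptr;
    gaspi_segment_ptr(seg, &ptr);
    double *buf = (double *)ptr;
    buf[rank] = (double)rank;                   /* local contribution */

    /* One-sided put of our slot into the right neighbour's segment,
     * with notification id == our rank attached to the transfer. */
    const gaspi_rank_t right = (rank + 1) % nprocs;
    gaspi_write_notify(seg, rank * slot,        /* local  segment, offset */
                       right,
                       seg, rank * slot,        /* remote segment, offset */
                       slot,
                       rank, 1,                 /* notification id, value */
                       0, GASPI_BLOCK);         /* queue, timeout */

    /* Block until the left neighbour's notified write has fully arrived. */
    const gaspi_rank_t left = (rank + nprocs - 1) % nprocs;
    gaspi_notification_id_t got;
    gaspi_notify_waitsome(seg, left, 1, &got, GASPI_BLOCK);

    gaspi_notification_t old;
    gaspi_notify_reset(seg, got, &old);         /* consume the notification */

    printf("rank %d received slot %d = %.1f\n", (int)rank, (int)got, buf[got]);

    gaspi_wait(0, GASPI_BLOCK);                 /* flush the queue */
    gaspi_proc_term(GASPI_BLOCK);
    return 0;
}

With GPI-2, a sketch like this would typically be launched with gaspi_run; error checking of the gaspi_return_t results is omitted for brevity.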

Place, publisher, year, edition, pages
IEEE, 2018. pp. 1-10
Keywords [en]
Collectives, Allreduce, Allgather, AllgatherV, MPI, PGAS, GASPI, shared windows, shared notifications
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-249835
DOI: 10.1109/PAW-ATM.2018.00006
ISI: 000462965600001
Scopus ID: 2-s2.0-85063078028
OAI: oai:DiVA.org:kth-249835
DiVA, id: diva2:1306075
Conference
2018 IEEE/ACM PARALLEL APPLICATIONS WORKSHOP, ALTERNATIVES TO MPI (PAW-ATM)
Note

QC 20190423

Available from: 2019-04-23 Created: 2019-04-23 Last updated: 2019-04-23 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
Scopus

Person records

Iakymchuk, Roman; Laure, Erwin; Markidis, Stefano

By author/editor

Al Ahad, Muhammed Abdullah; Iakymchuk, Roman; Laure, Erwin; Markidis, Stefano