On the strong scaling of the spectral element solver Nek5000 on petascale systems
KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Mechanics.
KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC. ORCID iD: 0000-0002-3859-9480
2016 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2016. Conference paper (Refereed)
Abstract [en]

The present work is targeted at performing a strong scaling study of the high-order spectral element fluid dynamics solver Nek5000. Prior studies such as [5] indicated a recommendable metric for strong scalability from a theoretical viewpoint, which we test here extensively on three parallel machines with different performance characteristics and interconnect networks, namely Mira (IBM Blue Gene/Q), Beskow (Cray XC40) and Titan (Cray XK7). The test cases considered for the simulations correspond to a turbulent flow in a straight pipe at four different friction Reynolds numbers Reτ = 180, 360, 550 and 1000. Considering the linear model for parallel communication, we quantify the machine characteristics in order to better assess the scaling behaviors of the code. Subsequently, sampling and profiling tools are used to measure the computation and communication times over a large range of compute cores. We also study the effect of the two coarse-grid solvers XXT and AMG on the computational time. Super-linear scaling due to a reduction in cache misses is observed on each computer. The strong scaling limit is attained for roughly 5,000-10,000 degrees of freedom per core on Mira and 30,000-50,000 on Beskow, with only a small impact of the problem size for both machines, and ranges between 10,000 and 220,000 depending on the problem size on Titan. This work aims at being a reference for Nek5000 users and also serves as a basis for potential issues to address as the community heads towards exascale supercomputers.
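For context, the "linear model for parallel communication" referred to above is, in its usual textbook (alpha-beta) form, a two-parameter cost model; the notation below follows that standard convention and is an assumption, not a formula quoted from the paper:

    t_{\mathrm{comm}}(m) \approx \alpha + \beta\, m

where \alpha is the per-message latency, \beta the inverse bandwidth and m the message size. Fitting \alpha and \beta for each interconnect is what allows the communication cost, and hence the strong-scaling limit, to be compared across machines such as Mira, Beskow and Titan.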

Place, publisher, year, edition, pages
Association for Computing Machinery, 2016.
Keyword [en]
Benchmarking, Computational fluid dynamics, Nek5000, Scaling, Degrees of freedom (mechanics), Reynolds number, Supercomputers, Computational time, Interconnect networks, Parallel communication, Parallel machine, Performance characteristics, Spectral element, Application programs
National Category
Mechanical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-207506; DOI: 10.1145/2938615.2938617; ScopusID: 2-s2.0-85014776002; OAI: oai:DiVA.org:kth-207506; DiVA: diva2:1106128
Conference
2016 Exascale Applications and Software Conference, EASC 2016, 25 April 2016 through 29 April 2016
Note

Conference code: 123835; Export Date: 22 May 2017; Conference Paper; Funding details: DOE, U.S. Department of Energy; Funding text: This research used resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC Centre for High Performance Computing (PDC-HPC). This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. We thank Scott Parker and Kevin Harms from ALCF at Argonne for their invaluable suggestions and insights into performance analysis on Mira. We also thank the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program and the Linné FLOW Center that funded part of this research. QC 20170607

Available from: 2017-06-07. Created: 2017-06-07. Last updated: 2017-06-07. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; Scopus

Search in DiVA

By author/editor
Offermans, N.; Gong, Jing; Schlatter, Philipp
By organisation
Linné Flow Center, FLOW; Mechanics; Centre for High Performance Computing, PDC
Mechanical Engineering
