Software news and update: Speeding up parallel GROMACS on high-latency networks
Max-Planck Institut Göttingen.
Uppsala University.
Max-Planck Institut Göttingen.
Stockholm University. ORCID iD: 0000-0002-2734-2794
2007 (English). In: Journal of Computational Chemistry, ISSN 0192-8651, E-ISSN 1096-987X, Vol. 28, no. 12, p. 2075-84. Article in journal (Refereed). Published.
Abstract [en]

We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet-switched clusters the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified here as the all-to-all communication that is required every time step. During such an all-to-all communication step, a huge number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, for example, a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. Having optimized an all-to-all routine that sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling improves dramatically, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that how the nodes are connected to the switch's ports is essential for optimum all-to-all performance. This is also demonstrated for the example of the Car-Parrinello MD code.
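The key fix described in the abstract is replacing the naive all-to-all, in which every node transmits to every other node at once and floods the switch, with a routine that sends the data in an ordered fashion. The sketch below is not the actual GROMACS or LAM MPI code; it is a minimal illustration of one common ordered scheme, a pairwise shift schedule over MPI_Sendrecv, in which phase k pairs rank r with ranks (r+k) mod n and (r-k) mod n so that each phase moves exactly one message per node. The function name ordered_alltoall and the buffer sizes are illustrative assumptions.

/*
 * Sketch of an ordered all-to-all over MPI (illustrative, not the
 * GROMACS implementation). In phase k, rank r sends its block for
 * rank (r + k) % n and receives the block from rank (r - k + n) % n,
 * so every phase carries one message per node instead of flooding
 * the switch with n*(n-1) simultaneous messages.
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Each rank contributes 'blocksize' bytes to every other rank. */
static void ordered_alltoall(const char *sendbuf, char *recvbuf,
                             int blocksize, MPI_Comm comm)
{
    int rank, n;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &n);

    for (int k = 0; k < n; k++) {
        int dst = (rank + k) % n;       /* partner we send to      */
        int src = (rank - k + n) % n;   /* partner we receive from */

        /* Blocking pairwise exchange; phase k completes on all
         * ranks before the traffic of phase k+1 begins locally. */
        MPI_Sendrecv(sendbuf + (size_t)dst * blocksize, blocksize, MPI_BYTE,
                     dst, 0,
                     recvbuf + (size_t)src * blocksize, blocksize, MPI_BYTE,
                     src, 0, comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, n;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    const int blocksize = 1 << 16;  /* 64 KiB per destination; test size only */
    char *sendbuf = malloc((size_t)n * blocksize);
    char *recvbuf = malloc((size_t)n * blocksize);
    memset(sendbuf, rank, (size_t)n * blocksize);

    ordered_alltoall(sendbuf, recvbuf, blocksize, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Because each phase has a fixed sender/receiver pairing, no switch port is ever the target of more than one concurrent stream, which is what prevents the TCP packet loss the paper attributes to the unordered all-to-all, even on switches without flow control.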

Place, publisher, year, edition, pages
2007. Vol. 28, no. 12, p. 2075-84
National subject category
Theoretical Chemistry; Software Engineering
Identifiers
URN: urn:nbn:se:kth:diva-82625
DOI: 10.1002/jcc.20703
ISI: 000248108900018
PubMedID: 17405124
OAI: oai:DiVA.org:kth-82625
DiVA, id: diva2:498446
Note
QC 20120302. Available from: 2012-02-12. Created: 2012-02-12. Last updated: 2018-01-12. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
PubMed

Person records

Lindahl, Erik
