Uppsala University Publications
Speeding up parallel GROMACS on high-latency networks
Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Cell and Molecular Biology. (Van der Spoel)
2007 (English). In: Journal of Computational Chemistry, ISSN 0192-8651, E-ISSN 1096-987X, Vol. 28, no. 12, pp. 2075-2084. Article in journal (Refereed). Published.
Abstract [en]

We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and the prerequisites for decent scaling even on such clusters, which offer only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet-switched clusters the scaling typically breaks down as soon as more than two computer nodes are involved, limiting the achievable speedup to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified as the all-to-all communication required every time step. During such an all-to-all communication step, a large number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements; for 16 CPUs, for example, a speedup of 11 is achieved. For more nodes, however, this mechanism also fails. With an optimized all-to-all routine that sends the data in an ordered fashion, we show that packet loss can be prevented completely for any number of multi-CPU nodes. The GROMACS scaling thus improves dramatically, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that optimum all-to-all performance depends critically on how the nodes are connected to the switch's ports. This is also demonstrated for the Car-Parrinello MD code.
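The ordered all-to-all idea described in the abstract can be illustrated with a small scheduling sketch. This is a hypothetical reconstruction, not the authors' actual routine: it assumes a power-of-two process count and uses the classic XOR pairwise-exchange pattern, in which round r pairs rank i with rank i XOR r. Each round is then a perfect matching, so every node sends and receives exactly one message at a time instead of all P*(P-1) messages flooding the switch at once.

```python
def ordered_alltoall_schedule(num_ranks):
    """Return a list of communication rounds for an ordered all-to-all.

    Each round is a list of (src, dst) pairs forming a perfect matching:
    no rank appears as source or destination more than once per round,
    which keeps any single switch port from being oversubscribed.
    Assumes num_ranks is a power of two (a common restriction for the
    XOR-based pairwise exchange pattern sketched here).
    """
    assert num_ranks > 0 and num_ranks & (num_ranks - 1) == 0, \
        "power-of-two rank count assumed in this sketch"
    rounds = []
    for r in range(1, num_ranks):
        # In round r, rank i exchanges its block with rank i XOR r.
        rounds.append([(i, i ^ r) for i in range(num_ranks)])
    return rounds


if __name__ == "__main__":
    # For 8 ranks, 7 rounds suffice: every rank meets every other rank
    # exactly once, one partner per round.
    for rnd, pairs in enumerate(ordered_alltoall_schedule(8), start=1):
        print(f"round {rnd}: {pairs}")
```

In a real MPI code, each round would correspond to one paired send/receive (e.g. a sendrecv between the matched ranks) instead of firing all messages simultaneously, which is what tames congestion on flow-control-less Ethernet switches.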

Place, publisher, year, edition, pages
2007. Vol. 28, no. 12, pp. 2075-2084.
Keyword [en]
GROMACS parallel molecular dynamics, Car-Parrinello MD, Ethernet flow control, MPI_Alltoall, network congestion
National Category
Chemical Sciences, Biological Sciences, Computer and Information Science
URN: urn:nbn:se:uu:diva-16712
DOI: 10.1002/jcc.20703
ISI: 000248108900018
PubMedID: 17405124
OAI: oai:DiVA.org:uu-16712
DiVA: diva2:44483
Available from: 2008-06-03. Created: 2008-06-03. Last updated: 2011-01-31. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
PubMed: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed&cmd=Retrieve&list_uids=17405124&dopt=Citation

By author/editor
van der Spoel, David
