[Beowulf] How Would You Test Infiniband in New Cluster?
tom.elken at qlogic.com
Tue Nov 17 17:35:48 PST 2009
> On Behalf Of Jon Forrest
> My HCA is a Mellanox Technologies MT25204 [InfiniHost III Lx HCA]
> (rev 20)
> I did the following, with the results shown:
> $ mpirun -np 2 -machinefile hosts ./mpi_nxnlatbw
> [0<->1] 3.67us 1289.409397 (MillionBytes/sec)
> [1<->0] 3.67us 1276.377689 (MillionBytes/sec)
> I also ran this with more nodes but the point-to-point
> times were about the same.
> Does this look right?
For InfiniHost III, these numbers look right, and you are using IB.
You may get somewhat higher bandwidth from the OSU MPI Benchmarks (OMB) or the Intel MPI Benchmarks (IMB, formerly Pallas), because mpi_nxnlatbw's bandwidth test uses a fairly modest message size. It is written to get reasonably close to peak bandwidth and best latency while still finishing on a fairly large cluster in a reasonable amount of time. As a result, though, the bandwidth test runs so quickly that taking a single OS interrupt can skew a few of the results.

Before concluding that a link is underperforming based on mpi_nxnlatbw, re-run the test and check whether the same link is still slow, or use a more comprehensive benchmark such as OMB or IMB.
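One generic way to guard a short-running timing test against the interrupt skew described above is to repeat the timed exchange several times and keep the best (minimum) sample: an interrupt inflates one sample, but rarely all of them. Below is a minimal Python sketch of that idea; the names are hypothetical and a local buffer copy stands in for the MPI send/receive pair (this is not how mpi_nxnlatbw itself is implemented).

```python
import time

def min_of_repeats(fn, repeats=5):
    """Time fn() several times and return the best (minimum) elapsed time.

    A single OS interrupt can inflate one sample, but it is unlikely to
    inflate every sample, so the minimum is a more robust estimate of the
    true cost than a single measurement.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)
    return best

# A 1 MiB buffer copy stands in for one message exchange (hypothetical).
payload = bytes(1 << 20)

def fake_transfer():
    _ = bytes(payload)  # copy the buffer in place of a send/receive pair

t = min_of_repeats(fake_transfer, repeats=10)
bandwidth_mb_s = len(payload) / t / 1e6
print(f"best-of-10 time: {t * 1e6:.1f} us, ~{bandwidth_mb_s:.0f} MB/s")
```

The same best-of-N idea applies to a real MPI ping-pong loop: wrap the exchange in the timed function and compare the minimum across repeats before deciding a link is slow.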
> Based on your numbers, it looks like my
> IB is slower than yours. Because of the strange way OFED
> was installed, I can't easily run over plain Ethernet.
> Thanks for your help
> Jon Forrest
> Research Computing Support
> College of Chemistry
> 173 Tan Hall
> University of California Berkeley
> Berkeley, CA
> jlforrest at berkeley.edu
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin