[Beowulf] Home beowulf - NIC latencies
Ole W. Saastad
ole at scali.com
Wed Feb 9 04:24:15 PST 2005
Dear all,
this thread reminded us that we promised to post HPCC numbers
showing the differences between interconnects, rather than between
interconnects and software stacks in combination. The numbers below
stem from a fairly old system (400 MHz FSB, PCI-X, etc.) and do not
reflect the absolute performance achievable on modern hardware.
Similarly, the NICs used are _not_ the latest and greatest.
The intent is simply to show the effect of the different interconnects
on the four simple communication metrics (excluding PTRANS etc.)
measured by HPCC (see http://icl.cs.utk.edu/hpcc/).
                        Gigabit Eth.     SCI   Myrinet   InfiniBand
Max Ping Pong Latency :        36.32    4.44      8.65         7.36
Min Ping Pong Bandw.  :       117.01  121.31    245.31       359.21
Random Ring Bandw.    :        37.59   47.70     69.30        18.02
Random Ring Latency   :        42.17    8.91     19.02         9.94
Latencies are in microseconds and bandwidths in MBytes/s (1e6 bytes/s).
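
For those who have not looked at HPCC: the ping pong numbers come from
a simple send/receive exchange between pairs of processes, and HPCC
reports the worst latency and worst bandwidth over many such pairs.
Below is a minimal MPI sketch of that kind of measurement -- not the
HPCC code itself -- with an illustrative message size and repetition
count, and only a single pair (ranks 0 and 1):

/* Minimal ping-pong sketch: ranks 0 and 1 bounce a small message back
 * and forth; half the average round-trip time approximates the
 * one-way latency. Message size and repetition count are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int reps = 1000;          /* illustrative repetition count */
    const int nbytes = 8;           /* small message -> latency test */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    buf = malloc(nbytes);

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, nbytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("one-way latency ~ %.2f us\n",
                   (t1 - t0) / (2.0 * reps) * 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}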
The HPCC version is 0.8, and the very same binary (and Scali MPI
Connect library) is used for all interconnects; the interconnect is
selected with -net tcp|sci|gm0|ib0 on the command line.
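
The random ring numbers, in contrast, load the network with all
processes at once: every rank exchanges messages with its neighbours
in a randomly permuted ring. Here is a rough MPI sketch in that
spirit -- not the HPCC implementation -- using the natural ring
instead of a random permutation and an illustrative message size.
Like HPCC, the code itself is interconnect agnostic; only the launch
option changes:

/* Rough ring-bandwidth sketch: all ranks send to their right
 * neighbour and receive from their left neighbour at the same time,
 * so the whole network is loaded at once. HPCC uses randomly permuted
 * rings so neighbours usually sit on different nodes; the natural
 * ring and the message size here are simplifying, illustrative choices. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int nbytes = 1 << 20;     /* 1 MB per message, illustrative */
    const int reps = 100;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sbuf = malloc(nbytes), *rbuf = malloc(nbytes);
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        /* send to the right neighbour, receive from the left one */
        MPI_Sendrecv(sbuf, nbytes, MPI_CHAR, right, 0,
                     rbuf, nbytes, MPI_CHAR, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    /* bytes sent per process divided by time: a rough per-process
     * figure, not the exact HPCC definition */
    if (rank == 0)
        printf("per-process ring bandwidth ~ %.1f MB/s\n",
               (double)nbytes * reps / (t1 - t0) / 1e6);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}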
Cluster information :
16 x Dell PowerEdge 2650 2.4 GHz
Dell PowerConnect 5224 GBE switch.
Mellanox HCA
Infinicon InfiniIO 3000
Myrinet 2000
Dolphin SCI 4x4 Torus
Scali MPI Connect version : scampi-3.3.7-2.rhel3
Mellanox IB driver version : thca-linux-3.2-build-024
GM version : 2.0.14
--
Ole W. Saastad, Dr.Scient.
Manager Cluster Expert Center
dir. +47 22 62 89 68
fax. +47 22 62 89 51
mob. +47 93 05 74 87
ole at scali.com
Scali - www.scali.com
High Performance Clustering