[Beowulf] MVAPICH2 and osu_latency

Tom Elken tom.elken at qlogic.com
Thu Jun 12 15:04:29 PDT 2008


So you're concerned about the gap between the 2.63 us that OSU measured
and the 3.07 us you measured.  I wouldn't be too concerned.
 
MPI latency can be quite dependent on the systems you use.  OSU used a
dual-processor system with 2.8 GHz processors.  Such a system has ~60 ns
latency to local memory.  On your 4-socket Opteron system, the local
memory latency is probably in the 90-100 ns range.
 
Assuming you are also using MVAPICH2, this difference in memory latency
is probably the main reason for the latency shortfall you are seeing.
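
If you want to check the local memory latency on your own boxes, a quick
pointer-chase does it.  This is just my rough sketch (buffer size and step
count are arbitrary); it builds one random cycle through a buffer much
larger than the caches and times dependent loads, so each load has to wait
for the previous miss:

/* pchase.c: rough local-memory-latency estimate.
 * Build with something like: gcc -O2 pchase.c -o pchase  (-lrt on older glibc)
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (size_t)(64 * 1024 * 1024 / sizeof(size_t))  /* 64 MiB buffer */
#define STEPS (size_t)(16 * 1024 * 1024)

int main(void)
{
    size_t *next = malloc(N * sizeof(size_t));
    size_t i, j, tmp;
    struct timespec t0, t1;
    double ns;

    /* Sattolo's algorithm: a random permutation that is one single cycle. */
    for (i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (i = N - 1; i > 0; i--) {
        j = (size_t)rand() % i;        /* assumes RAND_MAX >= N (true on glibc) */
        tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0, j = 0; i < STEPS; i++)
        j = next[j];                   /* dependent loads: misses are serialized */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (sink=%lu)\n", ns / STEPS, (unsigned long)j);
    free(next);
    return 0;
}

Run it pinned to one socket with memory bound to that socket (e.g. under
numactl) so you get the local number rather than a mix of local and remote.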
 
Another possibility is that the CPU you are running the MPI test on is
not the closest one to the PCIe chipset.  In that case you may be taking
extra HyperTransport (HT) hops on the way to the PCIe bus and the adapter
card.
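
If you want to rule that out, pin the benchmark to a core on the socket
that actually owns the PCIe bridge before MPI_Init.  Here is a rough
sketch of the idea (my example, not osu_latency itself; core 0 is just an
assumption, since which core sits next to the HCA is board specific):

/* pingpong_pinned.c: 0-byte ping-pong with the process pinned to core 0.
 * Build with: mpicc -O2 pingpong_pinned.c -o pingpong_pinned
 * Run with your usual launcher, two processes, one per node.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    cpu_set_t mask;
    int rank, i, iters = 10000;
    double t0, t1;

    /* Pin this process to core 0 (assumed to be on the HCA's socket). */
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 0-byte ping-pong; half the round-trip time is the one-way latency. */
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("0-byte latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}

Re-running with the pin moved to a core on a different socket should show
whether those extra HT hops are costing you anything measurable.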
 
-Tom


________________________________

	From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Jan Heichler
	Sent: Thursday, June 12, 2008 2:28 PM
	To: Beowulf Mailing List
	Subject: [Beowulf] MVAPICH2 and osu_latency

	Dear all!

	I found this page,
	http://mvapich.cse.ohio-state.edu/performance/mvapich2/opteron/MVAPICH2-opteron-gen2-DDR.shtml,
	as a reference for the MPI latency of InfiniBand. I am trying to
	reproduce those numbers at the moment, but I am stuck with:

	# OSU MPI Latency Test v3.0
	# Size            Latency (us)
	0                         3.07
	1                         3.17
	2                         3.16
	4                         3.15
	8                         3.19

	The equipment is two quad-socket Opteron blades (Supermicro) with
	Mellanox Ex DDR cards. A single 24-port switch connects them.

	Can anybody suggest what I can do to lower the latency?

	Regards, Jan
