[Beowulf] MS HPC... Oh dear...

John Vert jvert at windows.microsoft.com
Wed Jun 14 11:52:15 PDT 2006


We will have some real benchmarks announced over the next few months:
microbenchmarks, industry benchmarks, and application benchmarks. I am
not going to throw out numbers right here because I don't have all the
details yet and some of the driver stacks are still being tuned, but
our testing so far shows MPI latency comparable to the best Linux
numbers on the same hardware.
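
For reference, the latency numbers people quote usually come from a
simple two-rank ping-pong microbenchmark along these lines (a sketch
only; the 1-byte message and the iteration count are the usual
conventions, nothing specific to our stack):

/* Ping-pong latency sketch. Compile with mpicc (or the MS-MPI
 * equivalent) and run one rank on each of two nodes,
 * e.g. "mpiexec -n 2 ./pingpong". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 10000;
    char buf[1] = { 0 };              /* 1-byte message = latency test */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {              /* rank 0 initiates each round trip */
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {       /* rank 1 echoes */
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)                    /* half the round-trip time */
        printf("one-way latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}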

I know of more than one customer who has ported their application from
sockets to MPI simply because the MPI stack talks directly to the
hardware from user mode and therefore delivers better latency than IP
emulation done by a kernel driver. Some IB vendors provide SDP support
on Linux, which should be roughly equivalent. I do not know how trivial
that is to set up for your average *ix person, or how the latency of
SDP compares to a tuned MPI. I'd be interested to hear from anyone with
practical experience.
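
For anyone who wants to try the comparison, here is a plain stream
sockets version of the same ping-pong. The port number and iteration
count are arbitrary and error handling is omitted. As I understand it,
the usual way to run an unmodified binary like this over SDP on an
OFED Linux box is to preload the SDP library (LD_PRELOAD=libsdp.so),
which is the same no-relink/no-recompile idea as Winsock Direct:

/* TCP/SDP ping-pong sketch. Run "./sockpp server" on one node and
 * "./sockpp <server-ip>" on the other; both print the measured
 * one-way latency. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define PORT  18515   /* arbitrary */
#define ITERS 10000

static double usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(int argc, char **argv)
{
    int s, i, one = 1, is_server;
    char buf[1] = { 'x' };
    struct sockaddr_in addr;
    double t0, t1;

    if (argc < 2) {
        fprintf(stderr, "usage: %s server | <server-ip>\n", argv[0]);
        return 1;
    }
    is_server = (strcmp(argv[1], "server") == 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);

    if (is_server) {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        addr.sin_addr.s_addr = INADDR_ANY;
        setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsock, 1);
        s = accept(lsock, NULL, NULL);
    } else {
        s = socket(AF_INET, SOCK_STREAM, 0);
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        connect(s, (struct sockaddr *)&addr, sizeof(addr));
    }
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    t0 = usec();
    for (i = 0; i < ITERS; i++) {
        if (is_server) {              /* echo each byte back */
            recv(s, buf, 1, MSG_WAITALL);
            send(s, buf, 1, 0);
        } else {                      /* initiate each round trip */
            send(s, buf, 1, 0);
            recv(s, buf, 1, MSG_WAITALL);
        }
    }
    t1 = usec();

    printf("one-way latency: %.2f usec\n", (t1 - t0) / (2.0 * ITERS));
    close(s);
    return 0;
}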

John Vert
Development Manager 
Windows High Performance Computing

> -----Original Message-----
> From: Mark Hahn [mailto:hahn at physics.mcmaster.ca]
> Sent: Tuesday, June 13, 2006 6:22 PM
> To: John Vert
> Cc: beowulf at beowulf.org
> Subject: RE: [Beowulf] MS HPC... Oh dear...
> 
> > The high-speed interconnects plug into our MPI stack through Winsock
> > Direct. This enables low-latency usermode I/O at the sockets level.
> 
> how low-latency?  anyone who cares about latency needs to know the
> numbers, and especially versus linux on the same hardware.
> 
> > Any application that uses sockets will benefit from the high speed
> > interconnect without relinking or recompiling.
> 
> is this trivial (to a *ix person) or am I missing the point?
> most interconnects provide IP emulation and thus by definition work as
> you describe, no?  even the converse (use various interconnects without
> recompiling/linking) is also done pretty commonly.
> 




