FNN vs GigabitEther & Myrinet

Velocet math at velocet.ca
Thu Oct 18 10:28:32 PDT 2001


[ For those still sleeping, FNN = Flat Neighborhood Network.
  http://aggregate.org/FNN ]

I've been doing a fair bit of reading on the subject, and the plain ole
English ArsTechnica writeup on KLAT2's use of FNN gave me a few hints,
beyond their webpage, on how gigabit Ethernet bisection bandwidth and
bandwidth per node compare with using mere fast Ethernet with FNN instead
of GbE NICs and GbE switches.

A few questions:

(These questions apply to Myrinet as well, noting of course that it's
highly optimized and has better performance than GbE, but is also more expensive.)

In the ArsTechnica article, the bandwidth per node of having 3 or 4
fast Ether NICs in each node was said to be 'just as fast' as a single
onboard GbE NIC. Obviously the GbE NIC will not operate at 100% efficiency,
which relates to the previous thread I just posted on.

But how can 3 or 4 or even 5 NICs compare to the bandwidth per node
of GbE? Even if you can push 90% of the rated bandwidth out of 5 fast
Ether NICs, how does that compare with getting anything more than 50%
out of a GbE NIC?
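
To make that concrete, here's the back-of-the-envelope arithmetic I'm
doing (the efficiency figures below are my own guesses, not measured
numbers from the article):

    # Compare usable per-node bandwidth: N fast Ether NICs vs one GbE NIC,
    # at assumed efficiencies (guesses, not measurements).

    def usable_mbps(link_mbps, nics, efficiency):
        # usable per-node bandwidth in Mb/s
        return link_mbps * nics * efficiency

    for n in (3, 4, 5):
        print("%d x 100Mb FE  @ 90%%: %4.0f Mb/s" % (n, usable_mbps(100, n, 0.90)))

    for eff in (0.40, 0.50, 0.60):
        print("1 x 1000Mb GbE @ %2.0f%%: %4.0f Mb/s" % (eff * 100, usable_mbps(1000, 1, eff)))

At those guesses, 4 or 5 fast Ether NICs land in the same ballpark
(360-450 Mb/s) as a GbE NIC that only sustains half its rated speed,
which I assume is the point the article was making.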

I also realise that bandwidth isn't the only concern here - there's message
passing latency. Obviously with 3 or 5 NICs you can handle requests from
different FNN network segments in parallel, but each one is coming in much
more slowly (10 times slower?). Also, if a node is waiting for a message from
that ONE OTHER NODE that it really needs a reply from before it can continue,
don't we stumble into bottlenecks? (I realise this also depends on the
behaviour of the MPI libraries and the computation software and algorithms,
but I am wondering if there are some general guidelines.)
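
For the latency side, the way I'd model it (purely my own sketch, with
assumed numbers) is a fixed per-message latency plus serialization time
that scales with link speed:

    # Rough per-message transfer time: fixed software/wire latency plus
    # serialization time.  The 60us end-to-end latency is an assumption;
    # I'm also assuming it's roughly the same for FE and GbE, since most
    # of it is in the TCP stack rather than on the wire.

    def transfer_time_us(msg_bytes, link_mbps, latency_us=60.0):
        # 1 Mb/s == 1 bit/us, so bits / (Mb/s) gives microseconds
        return latency_us + (msg_bytes * 8.0) / link_mbps

    for size in (64, 1024, 65536):
        fe = transfer_time_us(size, 100.0)
        ge = transfer_time_us(size, 1000.0)
        print("%6d bytes:  FE %8.1f us   GbE %8.1f us" % (size, fe, ge))

If that model is anywhere near right, small messages are dominated by the
fixed latency (so a single FE link isn't anywhere near 10 times slower),
and it's only the large transfers that really pay the 10x serialization
penalty.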

I was also wondering whether there aren't problems with having 4 or 5 NICs on
a PCI bus - don't you run into shared IRQs, which may result in delays in
each node handling incoming traffic?
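
One quick way to check for that on a Linux box is to look for interrupt
lines in /proc/interrupts shared by more than one eth device. Here's a
throwaway script (the parsing is based on my rough assumption of the file
layout and may need tweaking per kernel):

    # List /proc/interrupts lines that mention an eth interface and flag
    # IRQs that appear to be shared (more than one device on the line).

    def nic_irq_lines(path="/proc/interrupts"):
        lines = {}
        for line in open(path):
            fields = line.split()
            if not fields or not fields[0].rstrip(":").isdigit():
                continue                  # skip the header and NMI/ERR rows
            if "eth" in line:
                lines[fields[0].rstrip(":")] = line.strip()
        return lines

    for irq, line in nic_irq_lines().items():
        flag = "SHARED" if "," in line else "ok"
        print("IRQ %s [%s]  %s" % (irq, flag, line))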

Obviously any bandwidth problems with the PCI bus itself will affect a GbE
NIC too. 32-bit/66MHz PCI is technically 2Gb/s+, but I'm sure you don't
actually get that to the cards. Can regular PCI even keep a GbE link full?
Is the only solution to go with PCI-X (more expensive gear per node)?
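
For reference, the theoretical peak numbers I'm going from (real sustained
throughput will be lower, since the bus is shared and transactions have
setup overhead):

    # Theoretical (burst) PCI bus throughput: width in bits times clock.

    def pci_gbps(width_bits, clock_mhz):
        return width_bits * clock_mhz / 1000.0

    print("PCI   32-bit/33MHz : %.2f Gb/s" % pci_gbps(32, 33))
    print("PCI   32-bit/66MHz : %.2f Gb/s" % pci_gbps(32, 66))
    print("PCI   64-bit/66MHz : %.2f Gb/s" % pci_gbps(64, 66))
    print("PCI-X 64-bit/133MHz: %.2f Gb/s" % pci_gbps(64, 133))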

------
My other concern was that parallelization software addresses hosts in
various different ways, and some of those ways may not be compatible with
FNN. This is because each FNN segment is a different IP network, so a given
machine has a different address on each segment; if the software broadcasts
"node1 is 10.0.0.1" to all other nodes on all networks, then we'll have an
addressing problem.
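
To illustrate what I mean (the numbering scheme here is just an assumption
of mine, not the official FNN layout): the same node has a different
address on each segment it's attached to, so a single advertised address
can't be right for every peer.

    # Toy FNN addressing: assume segment s is the 10.s.0.0/24 network and
    # node n is host n on every segment it touches.

    def fnn_address(segment, node):
        return "10.%d.0.%d" % (segment, node)

    # Say node1 sits on segments 0 and 2.  A peer that only shares segment 2
    # with it must use 10.2.0.1, even if it was told "node1 is 10.0.0.1".
    for seg in (0, 2):
        print("node1 as seen from segment %d: %s" % (seg, fnn_address(seg, 1)))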

Which packages have problems with this, and what are the possible solutions?
(I have a couple in mind, but I've gotta knock heads with my TCP-stack-hacking
friend who works at Red Hat for a bit and see what he thinks of them.)

/kc
-- 
Ken Chase, math at velocet.ca  *  Velocet Communications Inc.  *  Toronto, CANADA 
