[Beowulf] new release of GAMMA: x86-64 + PRO/1000
Tim Mattox
tmattox at gmail.com
Sat Feb 4 14:57:28 PST 2006
Hello,
I have been hoping to find the time to see what it would take to use
GAMMA and my FNN stuff together. Unfortunately, I haven't been able
to muster up that free time... As I alluded to in a previous post, I should
have my FNN runtime support software available for public consumption
sometime this summer... after my Ph.D. defense. Basically, this FNN runtime
support looks like an Ethernet channel bond, except that the FNN driver knows
that, depending on a packet's destination, only particular NIC(s) can be used
to reach it. There is more to it than that, but hey, that's why I'm writing a
dissertation on it plus some other FNN stuff. ;-)
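To give a flavor of the "channel bond with constraints" idea, here is a rough
user-space sketch of the transmit-side NIC selection. The node count, NIC
numbering, and reachability table are made up for illustration; the real thing
lives in a driver:

/* Sketch of FNN-style transmit NIC selection: unlike a plain channel
 * bond, the sender may only use the NIC(s) that share a switch with
 * the destination node. */
#include <stdint.h>
#include <stdio.h>

#define MAX_NODES     72
#define NICS_PER_NODE  2

/* fnn_reach[dst] is a bitmask of this node's NICs that reach node dst
 * in one switch hop; it would be filled in from the FNN wiring plan. */
static uint8_t fnn_reach[MAX_NODES];

/* Pick a usable NIC for a destination, round-robining when the FNN
 * gives us more than one choice (a plain channel bond would ignore
 * the mask and use any NIC). */
static int fnn_pick_nic(int dst)
{
    static unsigned rr;
    uint8_t mask = fnn_reach[dst];
    int i;

    if (!mask)
        return -1;              /* no single-hop path: wiring/table bug */
    for (i = 0; i < NICS_PER_NODE; i++) {
        int nic = (rr + i) % NICS_PER_NODE;
        if (mask & (1u << nic)) {
            rr = nic + 1;
            return nic;
        }
    }
    return -1;
}

int main(void)
{
    fnn_reach[5] = 0x2;         /* pretend node 5 is only reachable via NIC 1 */
    printf("packet for node 5 goes out NIC %d\n", fnn_pick_nic(5));
    return 0;
}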
Of course, the runtime support would have to be done differently for
FNN+GAMMA, but I don't know the details right now. From what
I remember of looking through the GAMMA docs+source code a few years
ago, I doubt it would be trivial. :-(
As for a 72-node cluster, a Universal FNN built with 48-port switches would
need 3 switches and two NICs per node. That FNN looks like a triangle,
and the wiring pattern is pretty easy to visualize. Our handy FNN design
CGI is here, for those who haven't seen it before: http://aggregate.org/FNN/
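As a quick sanity check on the arithmetic: each node plugs into 2 of the 3
switches, so the 72 nodes split into three groups of 24, one per switch pair,
each switch ends up with 2 x 24 = 48 ports in use, and any two nodes always
share at least one switch. A throwaway program to verify it (my node-to-switch
assignment is arbitrary):

/* Quick check of the 72-node / 3-switch "triangle" Universal FNN:
 * assign each node to one of the 3 switch pairs (24 nodes per pair)
 * and verify every pair of nodes shares a switch and no switch
 * exceeds 48 ports. */
#include <stdio.h>

int main(void)
{
    int sw_pairs[3][2] = { {0, 1}, {0, 2}, {1, 2} };
    int node_sw[72][2], ports[3] = {0, 0, 0};
    int i, j;

    for (i = 0; i < 72; i++) {
        int p = i / 24;                 /* 24 nodes per switch pair */
        node_sw[i][0] = sw_pairs[p][0];
        node_sw[i][1] = sw_pairs[p][1];
        ports[node_sw[i][0]]++;
        ports[node_sw[i][1]]++;
    }
    for (i = 0; i < 72; i++)
        for (j = i + 1; j < 72; j++) {
            int shared = node_sw[i][0] == node_sw[j][0] ||
                         node_sw[i][0] == node_sw[j][1] ||
                         node_sw[i][1] == node_sw[j][0] ||
                         node_sw[i][1] == node_sw[j][1];
            if (!shared)
                printf("nodes %d and %d have no common switch!\n", i, j);
        }
    printf("ports used per switch: %d %d %d (48 available each)\n",
           ports[0], ports[1], ports[2]);
    return 0;
}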
BTW- Anyone have good or bad reports on GigE 48-port switch performance?
A colleague has been having some mixed results with a D-Link DGS-1248T
on an Athlon64 cluster (NForce4 chipset with forcedeth ethernet driver).
Not enough trouble to warrant a full-blown performance investigation, but
sometimes things don't seem to go as fast as expected. For
now they are saying "good enough" and just using the cluster.
> > Does it still make sense to have a low-latency communication library for
> > Gigabit Ethernet?
Definitely! As others have posted, it would be cool if there were some
form of GAMMA-lite that would work without needing a custom Ethernet driver
for each kind of NIC. The e1000 is very common, but you don't tend to find
Intel NIC parts built into motherboards for AMD Athlon64s or Opterons. ;-)
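Mark's netpoll idea (quoted below) looks like one plausible route to such a
GAMMA-lite. Purely as a strawman, a netpoll-based skeleton might look
something like the following; every field name, signature, address, and port
number here is a guess or placeholder against the 2.6-era
include/linux/netpoll.h, not tested code:

/* Very rough sketch of a "GAMMA-lite" built on the 2.6-era netpoll
 * API instead of a per-NIC driver patch.  Everything below is an
 * assumption to check against your kernel's include/linux/netpoll.h. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/inet.h>
#include <linux/netpoll.h>

static void glite_rx(struct netpoll *np, int port, char *msg, int len)
{
	/* Packets for our UDP port land here, bypassing the normal
	 * softirq/socket path; real code would hand them to the
	 * user-level MPI library somehow. */
	printk(KERN_INFO "gamma-lite: got %d bytes on port %d\n", len, port);
}

static struct netpoll glite_np = {
	.name        = "gamma-lite",
	.dev_name    = "eth0",          /* placeholder interface */
	.local_port  = 6665,            /* made-up port numbers */
	.remote_port = 6665,
	.remote_mac  = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
	.rx_hook     = glite_rx,
};

static int __init glite_init(void)
{
	int err;

	/* 2.6-era netpoll kept IPs in host byte order, hence the ntohl;
	 * the addresses are placeholders. */
	glite_np.local_ip  = ntohl(in_aton("10.0.0.1"));
	glite_np.remote_ip = ntohl(in_aton("10.0.0.2"));

	err = netpoll_setup(&glite_np);
	if (err)
		return err;

	netpoll_send_udp(&glite_np, "ping", 4);   /* skips the stack on send, too */
	return 0;
}

static void __exit glite_exit(void)
{
	netpoll_cleanup(&glite_np);
}

module_init(glite_init);
module_exit(glite_exit);
MODULE_LICENSE("GPL");

The hard parts -- zero-copy delivery to the user-level MPI library and flow
control -- are of course not shown.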
Well, time to get back to my cave to continue writing...
On 2/4/06, Mark Hahn <hahn at physics.mcmaster.ca> wrote:
> > GAMMA only supports the Intel PRO/1000 Gigabit Ethernet NIC (e1000 driver).
>
> well, that's the sticking point, isn't it? is there any way that GAMMA
> could be converted to use the netpoll interface? for instance, look at
> drivers/net/netconsole.c which is, admittedly, much less ambitious
> than supporting MPI.
>
> > Latency at MPI level is below 12 usec, switch included (6.5 usec back-to-back).
>
> it would be interesting to know the latency of various GE switches -
> I believe quite a number of them now brag 1-2 us latency.
>
> > Does it still make sense to have a low-latency communication library for
> > Gigabit Ethernet?
>
> I certainly think so, since IMO not much has changed in some sectors of
> the cluster-config space. AFAICT, per-port prices for IB (incl. cable+switch)
> have not come down anywhere near GigE, or even GigE prices from ~3 years ago,
> when it was still slightly early-adopter.
>
> my main question is: what's the right design? I've browsed the gamma
> patches a couple times, and they seem very invasive and nic-specific.
> is there really no way to avoid this? for instance, where does the
> latency benefit come from - avoiding the softint and/or stack overhead,
> or the use of a dedicated trap, or copy-avoidance?
>
> further, would Van Jacobson's "channels" concept help out here?
> http://www.lemis.com/grog/Documentation/vj/lca06vj.pdf
>
> channels are a "get the kernel out of the way" approach, which I think
> makes huge amounts of sense. in a way, InfiniPath (certainly the most
> interesting thing to happen to clusters in years!) is a related effort,
> since it specifically avoids the baroqueness of IB kernel drivers.
>
> the slides above basically provide a way for a user-level TCP library
> to register hooks (presumably the usual <IP:port>) for near-zero-overhead
> delivery of packets (and some kind of outgoing queue as well). the
> results are quite profound - their test load consumed 77% CPU before,
> and 14% after, as well as improving latency by ~40%.
>
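For anyone who hasn't read the slides: a "channel" there is essentially a
lock-free single-producer/single-consumer ring shared between the driver and
the user process. Something along these lines, as a toy sketch with invented
names:

/* Toy illustration of the "channel" idea from the VJ slides: a
 * single-producer/single-consumer ring that the driver fills and the
 * user-level stack drains, with no locks and no softirq in between.
 * Real code needs memory barriers on the head/tail updates. */
#include <stdint.h>
#include <stddef.h>

#define CHAN_SLOTS 256u                 /* power of two */

struct channel {
    volatile uint32_t head;             /* advanced only by the producer (driver) */
    volatile uint32_t tail;             /* advanced only by the consumer (user lib) */
    void *slot[CHAN_SLOTS];             /* pointers to packet buffers */
};

/* Producer side: the driver hands a received packet to the channel.
 * Returns 0 if the consumer has fallen behind and the ring is full. */
static int chan_put(struct channel *c, void *pkt)
{
    uint32_t h = c->head;

    if (h - c->tail == CHAN_SLOTS)
        return 0;
    c->slot[h % CHAN_SLOTS] = pkt;
    c->head = h + 1;                    /* publish; needs a write barrier in real life */
    return 1;
}

/* Consumer side: the user-level TCP library pulls the next packet,
 * or NULL if the ring is empty. */
static void *chan_get(struct channel *c)
{
    uint32_t t = c->tail;
    void *pkt;

    if (t == c->head)
        return NULL;
    pkt = c->slot[t % CHAN_SLOTS];
    c->tail = t + 1;
    return pkt;
}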
> yes, it's true that if you spend the money, you can get much better
> performance with less effort (for instance, quadrics is just about
> the ultimate throw-money solution, with InfiniPath similar in performance
> but much more cost-effective.)
>
> but gigabit is just so damn cheap! tossing two 48pt switches at 72 $1500
> dual-opt servers in a FNN config and bang, you've got something useful,
> and you don't have to confine yourself to seti@home levels of coupling.
>
> regards, mark hahn.
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>
--
Tim Mattox - tmattox at gmail.com
http://homepage.mac.com/tmattox/
I'm a bright... http://www.the-brights.net/