[Beowulf] recommendation on crash cart for a cluster room: full cluster KVM is not an option I suppose?

Gerry Creager gerry.creager at tamu.edu
Wed Sep 30 06:53:01 PDT 2009


Hearns, John wrote:
>> I like the shared socket approach. Building a separate IPMI network
>> seems a lot of extra wiring to me. Admittedly the IPMI switches can
>> be dirt cheap, but it still feels like building an extra tiny road
>> for one car a day when a huge highway with spare capacity exists
>> right next door carrying thousands of cars. (OK, cheesy analogy!)
>
> Errrr.... you missed all my Beowulf posts about the clashes between
> the IPMI ports and the ports used for 'rsh' connections on a cluster,
> then? And all the shenanigans with setting sunrpc.min_resvport etc.?
>
> Having a separate, simple IPMI network which comes up when you power
> the racks up has a lot of advantages. 10/100 Netgear switches cost
> almost nothing, and getting another loom of Cat5 cables configured
> when the racks are being built is relatively easy.
> 
> By the way, which hardware do you use?
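
(For the archive: the clash John mentions is that IPMI-over-LAN
listens on UDP port 623, so on a shared-socket NIC the BMC silently
swallows any host traffic that happens to land on that port. A
minimal sketch of the workaround for a Linux NFS client, using the
sysctls John names; the values shown are the stock defaults, adjust
to taste:

    # /etc/sysctl.conf: keep the kernel RPC client's reserved
    # source ports clear of 623 (and 664), which the BMC grabs
    # on a shared-socket NIC
    sunrpc.min_resvport = 665
    sunrpc.max_resvport = 1023

That only covers the in-kernel RPC client; rsh's own reserved-port
allocator is presumably the rest of the shenanigans.)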


We've been down both paths. On our recent acquisition we ended up with 
separate, dedicated IPMI ports, despite our spec asking for shared 
socket ports. I bought 4 Netgear switches and added the infrastructure 
cabling. Having walked both paths within the last year (recently 
enough that the memory is still clear), I've definitely decided the 
completely separate IPMI network is the superior plan overall. I wish 
I could retrofit the Dell cluster to match, but it ain't gonna happen.

It's a much cleaner (from a cluster management view) approach, IMNSHO.
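
To illustrate "cleaner": with every BMC on its own flat management
subnet, one ipmitool invocation reaches any node and nothing collides
with cluster traffic. The address and credentials below are invented:

    # power control over the dedicated IPMI LAN
    ipmitool -I lanplus -H 10.0.100.42 -U admin -P secret chassis power status
    ipmitool -I lanplus -H 10.0.100.42 -U admin -P secret chassis power cycle
    # serial-over-LAN, i.e. the console a crash cart would give you
    ipmitool -I lanplus -H 10.0.100.42 -U admin -P secret sol activate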

gerry


