Dual Athlon MP 1U units
Robert G. Brown
rgb at phy.duke.edu
Fri Jan 25 19:17:48 PST 2002
On Fri, 25 Jan 2002, Steven Timm wrote:
>
> I am just wondering how many people have managed to get a
> cluster of dual Athlon-MP nodes up and running. If so,
> which motherboards and chipsets are you using, and has anyone
> safely done this in a 1U form factor?
We're waiting on a cluster room renovation to have our full cluster
built, but we've brought up individual 2U dual nodes based on the Tyan
Tiger. We've encountered a few minor problems -- the network card
inexplicably but consistently wouldn't work in the first slot of the
riser, so we had to swap it with a video card (probably unnecessary in
production but useful for assembly and debugging), and we had to reflash
the 3C905s to get them to PXEboot correctly. There were a few other minor
flakes as well.
However, once you get everything hammered out, you can PXEboot straight
into a kickstart install and really zip along (about 5 minutes for a full
install of a 7.2 "cluster node" kickstart configuration over 100BT), and
the nodes seem to work well enough in the limited tests we've run with
only a few of them up.
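
In case it helps anyone setting up the same thing, here's a rough sketch
of the PXE-to-kickstart handoff described above; the server address,
filenames, and paths below are made-up examples rather than our actual
configuration:

  # dhcpd.conf fragment (example): point PXE clients at the TFTP server
  next-server 10.0.0.1;
  filename "/pxelinux.0";

  # /tftpboot/pxelinux.cfg/default (example): boot the Red Hat 7.2 install
  # kernel and hand it a kickstart file over NFS
  default ks
  prompt 0
  label ks
      kernel vmlinuz
      append initrd=initrd.img ks=nfs:10.0.0.1:/kickstart/node-ks.cfg ksdevice=eth0

  # /kickstart/node-ks.cfg (excerpt): hypothetical "cluster node" profile
  install
  nfs --server 10.0.0.1 --dir /export/redhat-7.2
  lang en_US
  keyboard us
  network --bootproto dhcp
  rootpw changeme
  clearpart --all --initlabel
  part / --size 4096 --grow
  part swap --size 512
  %packages
  @ Networked Workstation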
Having messed inside these 2U cases, I personally would not really
recommend 1U duals, even though there are definitely vendors who will
sell them. 2U gives you three riser slots, which is useful. 2U gives
you room for a whole bunch of cooling fans (our cases have several and
we might install still more if we have thermal problems). 2U isn't
exactly >>roomy<< for these motherboards -- 1U would be downright
crowded, and I'd be very worried about heat when all nodes are really
cranking in a stack.
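
As an aside, once a full stack is racked it's worth keeping an eye on the
temperatures. A quick-and-dirty sketch, assuming lm_sensors is set up on
the nodes and that they're reachable over ssh as node01..node04 (adjust
the hostnames to taste):

  #!/bin/sh
  # Poll the sensors on each node and pull out the temperature lines.
  for n in node01 node02 node03 node04; do
      echo "== $n =="
      ssh $n "sensors | grep -i temp"
  done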
If you like, I'll give an update when we have them racked up. The room
is nearly finished but still needs the racks to be bolted to the floor,
security locks, an X10 or two for remote video monitoring, and we're
still trying to dicker over a thermal kill for the master power panels
(anybody have recommendations or comments?). Vendor recommendations for
telco-style patch panels that let whole bundles of cat5 be routed around
at once are also welcome. With luck we might be done in two
weeks.
rgb
>
> Thanks
>
> Steve Timm
>
>
> ------------------------------------------------------------------
> Steven C. Timm (630) 840-8525 timm at fnal.gov http://home.fnal.gov/~timm/
> Fermilab Computing Division/Operating Systems Support
> Scientific Computing Support Group--Computing Farms Operations
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb at phy.duke.edu