[Beowulf] Building new cluster - estimate
John Hearns
john.hearns at streamline-computing.com
Mon Jul 28 01:16:11 PDT 2008
On Mon, 2008-07-28 at 01:52 -0400, Mark Hahn wrote:
>
> > 2. reasonably fast interconnect (IB SDR 10Gb/s would suffice for our
> > computational needs, running LAMMPS molecular dynamics and VASP DFT codes)
> > 3. 48U rack (preferably with good thermal management)
>
> "thermal management"? servers need cold air in front and unobstructed
> exhaust. that means open or mesh front/back (and blanking panels).
>
Agreed. However, depending on the location, if space is tight you could
consider an APC rack with the heavy fan exhaust door on the rear and
vent the hot air out.
> > - 2x Intel Xeon E5420 Harpertown 2.5 GHz quad-core CPU: 2x$350 = $700
> > - Dual LGA 771 Intel 5400 Supermicro motherboard: $430
I'd recommend looking at the Intel Twin motherboard systems for this
project. Putting two nodes in each 1U leaves plenty of rack space for
the cluster head node, RAID arrays, a UPS and switches.
Supermicro have these motherboards with onboard InfiniBand, so there is
no need for extra HCA cards.
One thing you have to think about is power density - it is no use
cramming 40 1U systems plus switches and head nodes into one rack; it
will draw far too many amps. Figure on two APC PDUs per cabinet at the
very maximum. The Intel twins help here again, as they have a single
high-efficiency PSU whose losses are shared between two systems. I'm
still not sure we would avoid having to spread this sort of load between
two racks - it depends on the calculations.
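To make "the calculations" concrete, here is a minimal back-of-the-envelope
sketch in Python. The figures (roughly 350 W per dual-E5420 node under load,
a 1 kW overhead, a 230 V feed and 32 A PDUs) are assumptions for
illustration, not measured values - plug in your own:

    import math

    # Back-of-the-envelope rack power estimate. All figures below are
    # assumptions for illustration, not measured values.
    NODES_PER_RACK = 40      # 1U dual-socket compute nodes
    WATTS_PER_NODE = 350     # assumed draw per dual-E5420 node under load
    OVERHEAD_WATTS = 1000    # assumed head node, switches, etc.
    VOLTS = 230              # assumed single-phase supply voltage
    PDU_AMP_RATING = 32      # assumed rating of one APC PDU

    total_watts = NODES_PER_RACK * WATTS_PER_NODE + OVERHEAD_WATTS
    total_amps = total_watts / VOLTS
    pdus_needed = math.ceil(total_amps / PDU_AMP_RATING)

    print("Total draw: %d W, about %.1f A at %d V"
          % (total_watts, total_amps, VOLTS))
    print("PDUs needed at %d A each: %d" % (PDU_AMP_RATING, pdus_needed))

With those assumptions a single cabinet comes out at about 15 kW and 65 A,
i.e. three 32 A feeds rather than two - which is exactly why you may end up
splitting the load across two racks.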
You also need to put some budget aside for power and, importantly, for
air conditioning.
> > In principle, we have some experience in building and managing clusters,
> > but with a 40-node system it would make sense to get a good cluster
> > integrator to do the job. Can people share their recent experiences and
> > recommend reliable vendors to deal with?
Our standard build would be an APC rack, IPMI on all compute nodes, plus
two networked APC PDUs.
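One advantage of having IPMI everywhere is that routine checks can be
scripted. A minimal sketch, assuming ipmitool is installed and the BMCs
sit on a management LAN (the hostnames and credentials here are
placeholders, not a real site configuration):

    import subprocess

    # Poll chassis power status on every node's BMC with ipmitool over
    # the management LAN. Hostnames/credentials are placeholders.
    NODES = ["node%02d-ipmi" % i for i in range(1, 41)]
    USER, PASSWD = "admin", "changeme"

    for host in NODES:
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", USER, "-P", PASSWD, "chassis", "power", "status"],
            capture_output=True, text=True,
        )
        print(host, (result.stdout or result.stderr).strip())

The same pattern works for the networked PDUs' per-outlet switching via
their own management interfaces.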
John Hearns