[Beowulf] building a new cluster

Robert G. Brown rgb at phy.duke.edu
Wed Sep 1 07:28:00 PDT 2004


On Wed, 1 Sep 2004, SC Huang wrote:

> Hi,
>  
> I am about to order a new cluster using a $100K grant for running our in-house MPI codes. I am trying to have at least 36-40 (or more, if possible) nodes. The individual node configuration is:
>  
> dual Xeon 2.8 GHz
> 512K L2 cache, 1MB L3 cache, 533 FSB
> 2GB DDR RAM
> gigabit NIC
> 80 GB IDE hard disk
>  
> The network will be based on a gigabit switch. Most vendors I talked to use HP Procurve 2148 or 4148.
>  
> Can anyone comment on the configuration (and the switch) above? Any other comments (e.g., recommended vendors, etc.) are also welcome.

Only one comment about the compute platform.  I'd strongly urge you to
test and compare dual Opterons before settling on the Xeons.  If
possible, get a loaner box or boxes to run benchmarks (and ideally your
code itself).  Many vendors out there would be happy to at least loan
you account access to a test box.
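Even a trivial microbenchmark run on a loaner will tell you a lot about
node and interconnect behavior before you commit.  Something like the
following ping-pong sketch (just an illustration -- the file name,
message size, and repetition count here are arbitrary, and your own code
is the real test) measures round-trip time and effective bandwidth
between two MPI ranks:

/* pingpong.c -- minimal MPI ping-pong sketch (illustrative only).
 * Build:  mpicc -O2 -o pingpong pingpong.c
 * Run:    one rank on each of two nodes, e.g. mpirun -np 2 ./pingpong
 *         (exact launch syntax depends on your MPI stack)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    const int reps = 1000;
    const int nbytes = 1 << 20;          /* 1 MB messages */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            /* rank 1 echoes everything back */
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / reps;            /* average round trip */
        double bw  = 2.0 * nbytes / rtt / 1e6;    /* MB/s, both directions */
        printf("avg round trip: %.1f us, effective bandwidth: %.1f MB/s\n",
               rtt * 1e6, bw);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Sweep the message size if you want a curve rather than a single point,
and run it node-to-node through the actual switch you intend to buy;
that is where cheap gigabit gear tends to show its limits.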

I say this because in my experience an Opteron at equivalent cost blows
away a Xeon for nearly any application.  YMMV, of course, and caveat
emptor, etc.

A vendor that would be happy to sell you either or both is Penguin.  We
have thus far been pleased with our dual Opterons (Altus 1000E) from
Penguin.  However, there are plenty of other tier 1 and 2 vendors out
there, so shop around.

I'd advise at this point that you a) avoid vanilla boxes and get systems
from a reliable vendor; b) get 3-year no-questions-asked service
contracts on all the systems at the time you buy them.  Penguin has a
decent plan, as does Dell (although Dell doesn't sell Opterons).  IBM is
a good, if a bit expensive, vendor.

Your cluster is at the boundary of what one "can" build using shelves
and tower units.  If you plan to ever expand, or if space is an issue,
or if you just want a neat look, you might go with rackmount systems, in
which case the whole cluster would probably fit into a single rack.  You
might also look over rackable.com -- they have a very interesting design
that permits you to effectively double the density of CPUs in a rack and
lower overall power consumption while using commodity CPUs (with no
compromise in speed, that is).  I'd expect this to be a bit more costly,
though, and you might end up trading nodes for design features that may
not matter if you have adequate space and cooling.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
