[Beowulf] building a new cluster
Jeff Layton
jeffrey.b.layton at lmco.com
Wed Sep 1 11:41:45 PDT 2004
Robert G. Brown wrote:
>On Wed, 1 Sep 2004, Alvin Oga wrote:
>
>
>>3-yr no questions asked service contracts is tough ... must be good stuff
>>they're pushing and the buyers know what they're getting
>>
>>
>
>Well, with around 25 systems x 6 months of operation x near 100% duty
>cycle, we have zero failures so far. So who knows if it is really "no
>questions" support? So far they are "no failure systems", which works
>even better for me.
>
>
>>>Your cluster is at the boundary of what one "can" build using shelves
>>>and towers units. If you plan to ever expand, or if space is an issue,
>>>or if you just want a neat look, you might go with rackmount systems, in
>>>which case the whole cluster would probably fit into a single rack.
>>>
>>>
>>always best to use rackmounts ... looks fancier :-)
>>
>>
>
>...and often costs a couple of hundred extra dollars/node. You pay for
>the fancy looks. Over 40 systems that can cost you 3-4 systems -- you
>have to view it as spending nodes for the rackmount.
>
>Although in a way I agree. There are some attractive advantages in
>rackmount that might well justify it in spite of the cost. In any sort
>of machine room context, rackmount is more or less a requirement. If
>strong growth (more nodes) is expected, it probably should be a
>requirement. Racks keep things neater and are easier in human terms to
>install and maintain. And sometimes even the fancier looks (as Alvin
>says:-) can have value, if you are trying to "sell" your cluster to a
>granting agency or officer on a site visit.
>
>It's a bit more "professional", for all that nearly all the original
>"professional" beowulf clusters were shelf units and towers...
>
Using regular desktop cases can also give you an advantage (something
that Tim Mattox alluded to). You will be able to use all of your PCI slots,
so you could try an FNN (Flat Neighborhood Network), a hypercube, or a
torus network very easily (much more difficult to do with a single, or at
best two, riser cards).
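
For what it's worth, the wiring pattern for a hypercube is easy to
prototype on paper before committing PCI slots to it: each node's
neighbors are just the node numbers that differ from its own in exactly
one bit. Here's a minimal sketch in C with MPI (nothing FNN- or
NIC-specific; the one-NIC-per-dimension mapping is only an assumption
for illustration) that prints the link each node would need per
dimension:

/* hypercube_links.c -- print the hypercube neighbors of each MPI rank.
 * Illustrative sketch only: assumes the node count is a power of two
 * and that one NIC (PCI slot) is available per dimension.
 * Compile: mpicc hypercube_links.c -o hypercube_links
 * Run:     mpirun -np 8 ./hypercube_links
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* find d such that 2^d == number of nodes */
    int dims = 0;
    while ((1 << dims) < size)
        dims++;

    if ((1 << dims) != size) {
        if (rank == 0)
            fprintf(stderr, "node count %d is not a power of two\n", size);
        MPI_Finalize();
        return 1;
    }

    /* a hypercube neighbor differs from us in exactly one address bit,
     * so flipping bit d gives the peer reached over the NIC for
     * dimension d */
    for (int d = 0; d < dims; d++) {
        int neighbor = rank ^ (1 << d);
        printf("node %d, NIC %d -> node %d\n", rank, d, neighbor);
    }

    MPI_Finalize();
    return 0;
}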
You can even take this a step further and put a nice graphics card in
each desktop case, then experiment with using the GPUs to run code with
BrookGPU (one of my secret desires is to run one MPI code using the
GPUs on the nodes and another using the CPUs on the nodes :) ).
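
On the two-codes-at-once idea, the MPI side of it is just a communicator
split: the GPU-driven code (the BrookGPU calls themselves aren't shown
here) lives in one sub-communicator and the CPU code in the other. A
rough sketch, where run_gpu_code()/run_cpu_code() are made-up stand-ins
for the real solvers and the even/odd split is arbitrary:

/* split_gpu_cpu.c -- sketch of running two MPI codes side by side.
 * run_gpu_code() / run_cpu_code() are hypothetical stand-ins for the
 * real solvers; the BrookGPU calls themselves are not shown.
 * Compile: mpicc split_gpu_cpu.c -o split_gpu_cpu
 */
#include <mpi.h>
#include <stdio.h>

/* placeholders for the two independent codes */
static void run_gpu_code(MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    printf("GPU-side code is rank %d of its own communicator\n", rank);
}

static void run_cpu_code(MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    printf("CPU-side code is rank %d of its own communicator\n", rank);
}

int main(int argc, char **argv)
{
    int world_rank;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color 0 = "GPU" half, color 1 = "CPU" half (even/odd is
     * arbitrary; in practice you'd key this off which nodes actually
     * have a graphics card) */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    if (color == 0)
        run_gpu_code(subcomm);
    else
        run_cpu_code(subcomm);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}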
Jeff
> rgb
>
>
>>c ya
>>alvin
>>
>>
>
>Robert G. Brown http://www.phy.duke.edu/~rgb/
>Duke University Dept. of Physics, Box 90305
>Durham, N.C. 27708-0305
>Phone: 1-919-660-2567 Fax: 919-660-2525 email: rgb at phy.duke.edu
>
>
>
--
Dr. Jeff Layton
Aerodynamics and CFD
Lockheed-Martin Aeronautical Company - Marietta