Huinalu Linux SuperCluster

Ron Brightwell rbbrigh at
Thu Mar 15 15:38:35 PST 2001

> > number to what we currently have up and running as a parallel machine.
> > There are another 400+ 466 MHz Alphas sitting next to those 1024 nodes that
> > will be integrated in the next few weeks.
> My dream... 
> How did you manage to get all of these toys at Sandia? (blackmail some
> politicians?)
> If you figure out that you have too many machines, a lot of people would
> be very happy to help you :-)

Actually, the Cplant system software was designed from the beginning to
support a cluster on the order of 10,000 nodes.  The fact that we had fewer
was just a limitation of the budget.  Our need for help is independent of
the number of machines, but comes from the desire to have a more robust
environment and more advanced features.  The large number of machines should
be an enticement for working with/for us, but it isn't the primary reason
we need help. (This probably isn't the right forum for recruiting, but send
your resumes to jobs at if you would like to join us.)

> How many nodes in Cplant these days (total) ?

The total is hard to get at without a breakdown of the different production
and development clusters:

   Alaska       272   500 MHz EV56
   Barrow        96   500 MHz EV56
   Siberia      592   500 MHz EV6
   Antarctica
     SON         84   80x 466 MHz EV6 + 4x 500 MHz EV6
     SRN         24   500 MHz EV6
     Middle    1536   466 MHz EV6
   Iceberg       32   500 MHz EV56
   Iceberg2      16   500 MHz EV6
   Asilomar     128   433 MHz EV56
   Carmel       128   500 MHz EV6
   Diablo       256   466 MHz EV6
   ?             32   466 MHz EV6

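For a rough total, you can just sum the breakdown above. A quick sketch (the names and node counts come straight from the table; the unnamed 32-node cluster is included as-is, and no other assumptions are made):

```python
# Node counts from the Cplant breakdown above. SON, SRN, and Middle are
# listed as separate entries; the final unnamed 32-node cluster is kept.
clusters = {
    "Alaska": 272, "Barrow": 96, "Siberia": 592,
    "SON": 84, "SRN": 24, "Middle": 1536,
    "Iceberg": 32, "Iceberg2": 16, "Asilomar": 128,
    "Carmel": 128, "Diablo": 256, "unnamed": 32,
}

total = sum(clusters.values())
print(total)  # 3196 nodes across the production and development clusters
```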

Antarctica is designed to be switchable like the ASCI/Red machine: it has a
large middle section that can be moved between the open, unclassified, and
(currently missing) classified "heads".


More information about the Beowulf mailing list