disadvantages of a linux cluster

Guy Coates gmpc at sanger.ac.uk
Tue Nov 12 03:09:48 PST 2002


> Just for the record, how much did this cluster cost?  Or at least, how
> much does a 3U with 24 blades cost?

Can't comment on how much we paid, I'm afraid, but the list price for an
800 MHz PIII blade is $1,249 (you also have to figure in the cost of the
chassis, the price of which does not seem to be on RLX's website).

Performance-wise, Linpack pulls about 550 MFLOPS on a single blade
(dmesg reports 1592.52 BogoMIPS per CPU). Disk I/O isn't great (IDE disks),
but there are two of them per blade so we use RAID-0, which helps. The
network is only 100BaseT, which is not good if you run MPI (we don't), but
there are three interfaces per blade, which allows us to run a slightly
strange network topology in order to move large data files onto the blades
in a sensible amount of time.
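
To give a rough feel for the data-staging problem, here is a back-of-envelope
Python sketch; the dataset size and link efficiency are illustrative
assumptions rather than measurements from our cluster:

    # Back-of-envelope transfer-time estimate for staging data onto a blade.
    # DATASET_GB and EFFICIENCY are assumed figures, for illustration only.

    DATASET_GB = 10          # hypothetical database size to copy to a blade
    LINE_RATE_MBPS = 100     # 100BaseT line rate
    EFFICIENCY = 0.9         # assume ~90% of line rate for bulk transfers

    def transfer_minutes(size_gb, links=1):
        """Minutes to copy size_gb over `links` independent 100BaseT links."""
        throughput_mbytes_s = links * LINE_RATE_MBPS * EFFICIENCY / 8
        return size_gb * 1024 / throughput_mbytes_s / 60

    for links in (1, 2, 3):
        print(f"{links} link(s): {transfer_minutes(DATASET_GB, links):.0f} min")

At roughly 15 minutes for 10 GB over a single link, spreading transfers
across the extra interfaces makes an obvious difference.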

>So you don't seem to be getting a lot more MHz/Watt,

Probably not too surprising, as the CPU and disks are going to be the
major power draw, and they are bog-standard PC parts, same as in every
other Lintel cluster.
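
As a quick sanity check on the MHz/Watt point, with rough assumed power
figures (not measured on our hardware):

    # Rough MHz-per-Watt comparison; the wattages are assumed, typical
    # datasheet-style numbers, not measurements.

    P3_MHZ, P3_WATTS = 800, 21     # assumed draw of an 800 MHz PIII
    P4_MHZ, P4_WATTS = 2400, 58    # assumed draw of a 2.4 GHz P4

    print(f"PIII blade: {P3_MHZ / P3_WATTS:.0f} MHz/W")   # ~38
    print(f"P4:         {P4_MHZ / P4_WATTS:.0f} MHz/W")   # ~41

So per CPU the two come out in roughly the same ballpark.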

> although you can certainly pack more MHz and Watts per U (with the
> accompanying problem in shedding heat from an even higher power density
> than one has in current 2U Intel duals).

Correct. Space was the primary limiting factor in our case, and we house
all the blades in two 19" 42U racks.
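
The packing arithmetic, purely as an illustration (a real rack also loses
some U to switches, consoles and power distribution):

    # Rough CPU-packing comparison for a 42U rack; illustrative only.

    RACK_U = 42

    blade_cpus_per_rack = (RACK_U // 3) * 24   # 24 single-CPU blades per 3U chassis
    dual_cpus_per_rack  = (RACK_U // 2) * 2    # 2 CPUs per 2U dual server

    print(f"blade CPUs per rack: {blade_cpus_per_rack}")    # 336
    print(f"2U-dual CPUs per rack: {dual_cpus_per_rack}")   # 42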

> it isn't clear that you win, especially when you have to contend with
> Amdahl's law (across more processors),

We do win, as our workload is embarrassingly parallel (mostly BLAST and a
few other non-parallel algorithms). If we needed fast, low-latency
interconnects we would have ended up with a different sort of cluster.
(If you believe the press releases from the various blade manufacturers,
"next year's" products will have Myrinet and Fibre Channel integrated into
the blade chassis, making them potentially useful for MPI-type jobs.)
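
For what it's worth, here is a tiny Amdahl's-law sketch in Python showing
why the serial fraction of the job, rather than raw per-CPU speed, decides
whether lots of slower blades win; the parallel fractions are illustrative:

    # Amdahl's law: ideal speedup on n CPUs for a job whose parallel
    # fraction is p. The fractions used here are example values.

    def amdahl_speedup(p, n_cpus):
        return 1.0 / ((1.0 - p) + p / n_cpus)

    # Embarrassingly parallel work (e.g. many independent BLAST searches):
    print(f"p=0.999, 48 CPUs: {amdahl_speedup(0.999, 48):.1f}x")   # ~45.9x
    # A tightly coupled job with a 10% serial fraction gains far less:
    print(f"p=0.90,  48 CPUs: {amdahl_speedup(0.90, 48):.1f}x")    # ~8.4x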

> I don't really know how management of 24 800 MHz blades in 3U compares
> to managing the 8 2400 MHz P4's in 8U that would replace them.

The blades are very easy to manage. There are no user-serviceable parts on
a blade, so if a disk or CPU dies we pull the whole blade and replace it
with a new one. Whether this is a good thing or not depends on how well you
get on with your vendor and the T&Cs of your service agreement :). RLX have
some nifty blade-management software which we use to provision OSes, look
at hardware health, get serial consoles, etc.

Cheers,

Guy Coates

-- 
Guy Coates,  Informatics System Group
The Wellcome Trust Sanger Institute, Hinxton, Cambridge, CB10 1SA, UK
Tel: +44 (0)1223 834244 ex 7199
