[Beowulf] GPFS on Linux (x86)

Mark Hahn hahn at physics.mcmaster.ca
Thu Sep 14 20:19:04 PDT 2006


>> I have only two very wimpy factoids to offer: my 70TB HP SFS
>> disk array (36 SFS20 shelves with 11x 250G SATA each) has had just one bad 
>> disk since it was installed (say, March).
>> so that's one failed disk in roughly 1.7 M disk-hours, aggregated over all
>> 396 disks, actually a lower rate than I would have expected...
>
> I never see disks fail this way.  Go unplug the array and turn it back
> on.  Tell me how many disks fail then. :-)

hmm, it's true that the disk arrays have only had a few power cycles
(though they did spend their first few months on line power, and they
have had 1-2 manual cycles for firmware updates since then).  but I thought
the 1536 80G disks in the nodes were more interesting, and they've certainly
had more than a few power cycles.
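
for what it's worth, here is a rough back-of-the-envelope check of that
1.7 Mhours figure (a sketch only: the ~6 months of per-disk power-on time
is an assumption based on the March install date, not something stated above):

# rough sanity check of the aggregated disk-hours quoted above
# (the 6-month service time is an assumption, not from the post)
shelves = 36                  # SFS20 shelves in the array
disks_per_shelf = 11          # 250G SATA disks per shelf
hours_in_service = 6 * 730    # ~6 months since the March install (assumed)
failures = 1                  # bad disks observed so far

disks = shelves * disks_per_shelf        # 396 disks
disk_hours = disks * hours_in_service    # ~1.73 million disk-hours
print(f"{disks} disks, {disk_hours / 1e6:.2f} M disk-hours, "
      f"{failures / disk_hours * 1e6:.2f} failures per M disk-hours")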

regards, mark hahn.


