[Beowulf] GPFS on Linux (x86)
Mark Hahn
hahn at physics.mcmaster.ca
Thu Sep 14 16:53:10 PDT 2006
> If someone would be so kind as to help me find *real* data that
> demonstrates higher SATA/IDE failure rates as compared to SCSI, I would
> most appreciate it.
I have only two very wimpy factoids to offer: my 70TB HP SFS
disk array (36 SFS20 shelves with 11x 250G SATA drives each) has
had just one bad disk since it was installed (around March).
That works out to one failure in roughly 1.7M aggregated disk-hours,
actually a lower rate than I would have expected...
That storage is attached to 768 compute nodes, each of which has
2x 80G SATA drives, which I believe have had no failures at all
(roughly 6.6M aggregated disk-hours!).
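For the curious, here is a quick back-of-envelope sketch of where
those aggregated figures come from; the ~6-month service period
(March through mid-September) is my assumption based on the dates above:

    # rough check of the aggregated disk-hours quoted above
    # assumption: ~6 months in service, i.e. ~4,320 powered-on hours per drive
    HOURS_IN_SERVICE = 6 * 30 * 24      # ~4,320 hours, assumed

    # HP SFS array: 36 SFS20 shelves x 11 SATA drives each
    sfs_drives = 36 * 11                # 396 drives
    sfs_hours = sfs_drives * HOURS_IN_SERVICE
    print(f"SFS array: {sfs_drives} drives, ~{sfs_hours/1e6:.1f}M drive-hours, 1 failure")

    # compute nodes: 768 nodes x 2 SATA drives each
    node_drives = 768 * 2               # 1,536 drives
    node_hours = node_drives * HOURS_IN_SERVICE
    print(f"compute nodes: {node_drives} drives, ~{node_hours/1e6:.1f}M drive-hours, 0 failures")

which prints roughly 1.7M and 6.6M drive-hours respectively.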
The hardware was all assembled and burned in quite a while
before being delivered, so these drives should be at the bottom of
the bathtub curve. During some periods, machine-room temperatures
have not been exceptionally well regulated :(