[Beowulf] GPFS on Linux (x86)
Craig Tierney
ctierney at hypermall.net
Thu Sep 14 17:59:47 PDT 2006
Mark Hahn wrote:
>> If someone would be so kind as to help me find *real* data that
>> demonstrates higher SATA/IDE failure rates as compared to SCSI, I would
>> most appreciate it.
>
> I have only two very wimpy factoids to offer: my 70TB HP SFS
> disk array (36 SFS20 shelves with 11x 250G SATA each) has had just one
> bad disk since it was installed (say, March).
> So that's one bad disk in roughly 1.7 Mhours of aggregate drive time;
> actually a lower rate than I would have expected...
I never see disks fail this way. Go unplug the array and turn it back
on. Tell me how many disks fail then. :-)
Craig
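
For what it's worth, the 1.7 Mhours figure is easy to sanity-check: it's just the
drive count times the time in service. A quick Python sketch, where the ~4,300 hours
(roughly six months) is my assumption and the 36x11 drive count comes from the
numbers quoted above:

    # Back-of-the-envelope aggregate drive-hours for the SFS array.
    # Assumptions: 36 shelves x 11 drives, ~4,300 hours since the March install.
    shelves = 36
    drives_per_shelf = 11
    hours_in_service = 4300                      # assumed: roughly six months of service
    drives = shelves * drives_per_shelf          # 396 drives
    drive_hours = drives * hours_in_service      # ~1.7 million aggregate drive-hours
    print(f"{drive_hours / 1e6:.1f} Mhours, 1 failure observed")

One failure in that many aggregate drive-hours is consistent with the "lower rate
than I would have expected" remark above.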
>
> That storage is attached to 768 compute nodes, each of which has 2x 80G
> SATA drives, which I believe have had no failures (6.6 Mhours!).
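
The same quick check works for the compute-node drives, again assuming ~4,300 hours
of service and taking the 768 nodes with 2x 80G drives each at face value:

    # Aggregate drive-hours for the compute-node SATA drives.
    # Assumptions: 768 nodes x 2 drives each, same ~4,300 hours in service.
    nodes = 768
    drives_per_node = 2
    hours_in_service = 4300
    drive_hours = nodes * drives_per_node * hours_in_service
    print(f"{drive_hours / 1e6:.1f} Mhours, no failures observed")   # ~6.6 Mhours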
>
> The hardware was all assembled and burned in quite a while
> before it was delivered, so this should be the bottom of the bathtub curve.
>
> During some periods, machine-room temperatures have not been
> especially well regulated :(