[Beowulf] GPFS on Linux (x86)

Michael Huntingdon hunting at ix.netcom.com
Thu Sep 14 19:02:15 PDT 2006


If you really want to see MTBF numbers for both SCSI and 
SATA, press HP for them. Given your previous investment with HP, 
they'll push back, but they will give them up. I'm assuming you are 
working directly with HP.

~m

At 04:53 PM 9/14/2006, Mark Hahn wrote:
>>If someone would be so kind as to help me find *real* data that
>>demonstrates higher SATA/IDE failure rates as compared to SCSI, I would
>>most appreciate it.
>
>I have only two very wimpy factoids to offer: my 70TB HP SFS
>disk array (36 SFS20 shelves with 11x 250G SATA each) has had just 
>one bad disk since it was installed (say, March).
>So that's one disk failure in 1.7M aggregated disk-hours, actually a 
>lower rate than I would have expected...
>
>that storage is attached to 768 compute nodes, each of which has 
>2x 80G SATA disks, which I believe have had no failures (6.6M hours!).
>
>the hardware was all assembled and burned in quite a while
>before being delivered, so this should be the bottom of the bathtub
>curve (past infant mortality, before wear-out).
>
>during some periods, machine-room temperatures have not been 
>exceptionally well-regulated :(
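
As a sanity check on Mark's aggregated disk-hour figures above, here is a 
minimal back-of-the-envelope sketch. The ~6 months (~4380 h) of power-on 
time per disk is my assumption based on his "say, March", not a figure 
from the post:

# Back-of-the-envelope check of the aggregated disk-hours quoted above.
# Assumption (mine): every disk has ~6 months (~4380 h) of power-on time.
HOURS = 6 * 730

sfs_disks = 36 * 11              # 36 SFS20 shelves x 11 disks each
sfs_hours = sfs_disks * HOURS    # ~1.7M disk-hours, 1 failure observed
print("SFS array: %.1fM disk-hours, 1 failure" % (sfs_hours / 1e6))

node_disks = 768 * 2             # 768 compute nodes x 2 SATA disks each
node_hours = node_disks * HOURS  # ~6.7M disk-hours, 0 failures observed
print("nodes: %.1fM disk-hours, 0 failures" % (node_hours / 1e6))

# One failure in ~1.7M hours implies an MTBF of ~1.7M hours, i.e. an
# annualized failure rate (AFR) of roughly 8766 / MTBF ~= 0.5%.
print("implied AFR: %.2f%%" % (100 * 8766.0 / sfs_hours))

That works out to an implied AFR of about half a percent, which is why 
one failure in six months across ~400 disks is an unremarkable (even 
good) result rather than evidence either way on SATA vs. SCSI.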




