Linux Software RAID5 Performance

Michael Prinkey mikeprinkey at hotmail.com
Wed Apr 3 11:49:00 PST 2002


Indeed, multiple processes accessing the device do significantly degrade 
performance.  Fortunately for us, though, access speed is limited by 
NFS/SMB and the network, not by array performance.  Unfortunately, the 
unit is online now, so I can't fiddle with the settings and test it 
further.
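
For anyone who wants to reproduce the multi-reader case, something like 
the rough Python sketch below will launch several sequential readers 
against the array at once so you can compare against single-reader 
numbers.  (The device path, reader count, and sizes are placeholders; 
reading the raw md device needs root, and page-cache effects aren't 
controlled for, so treat the results as rough comparisons only.)

    # Concurrent sequential-read benchmark sketch (device path assumed).
    import os, time
    from multiprocessing import Process

    DEVICE = "/dev/md0"          # placeholder: the array under test
    READERS = 4                  # simultaneous sequential readers
    CHUNK = 1024 * 1024          # 1 MiB per read
    TOTAL = 256 * 1024 * 1024    # 256 MiB per reader

    def reader(offset):
        """Read TOTAL bytes sequentially, starting at a per-process offset."""
        fd = os.open(DEVICE, os.O_RDONLY)
        os.lseek(fd, offset, os.SEEK_SET)
        done = 0
        t0 = time.time()
        while done < TOTAL:
            buf = os.read(fd, CHUNK)
            if not buf:
                break
            done += len(buf)
        os.close(fd)
        dt = max(time.time() - t0, 1e-6)
        mb = done / (1024.0 * 1024.0)
        print("offset %d: %.1f MB in %.1f s (%.1f MB/s)" % (offset, mb, dt, mb / dt))

    if __name__ == "__main__":
        # Spread the readers across the device so they contend for seeks,
        # which is where the master/slave sharing should hurt most.
        procs = [Process(target=reader, args=(i * TOTAL * 4,)) for i in range(READERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Run it once with READERS = 1 and again with 4 or 8; the gap between the 
aggregate numbers is the contention the shared IDE ports add.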

WRT reliability, we have seen the array drop to degraded mode because of a 
single drive failure.  We have also seen a single drive failure take down 
the entire IDE port.  That results in the md device disappearing until you 
swap out the offending drive and restart the array.  There was no data 
loss in either case.  Usually one drive goes, the array drops into 
degraded mode, and it starts reconstructing onto the spare.  Then the 
second drive goes and the array disappears.  It is a bit disconcerting to 
do ls /raid and get nothing back, but changing out the drive and 
restarting the array brings everything back.
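
If you want to catch those states before someone notices an empty /raid, 
the kernel reports them in /proc/mdstat.  Here is a rough Python sketch; 
the array name md0 is an assumption, and it relies on the usual mdstat 
layout where the per-disk status string (e.g. [UUU_U]) follows the 
"md0 : active raid5 ..." line.

    # Degraded/missing array check against /proc/mdstat (assumptions:
    # array is named md0, and the status string like [UUU_U] sits on
    # the line after the "md0 :" line, which is the usual kernel format).
    import re

    ARRAY = "md0"   # placeholder array name

    def check_mdstat(path="/proc/mdstat"):
        with open(path) as f:
            lines = f.read().splitlines()
        for i, line in enumerate(lines):
            if line.startswith(ARRAY + " :"):
                status = lines[i + 1] if i + 1 < len(lines) else ""
                m = re.search(r"\[([U_]+)\]", status)
                if m and "_" in m.group(1):
                    print("%s is running DEGRADED: %s" % (ARRAY, m.group(0)))
                elif m:
                    print("%s looks healthy: %s" % (ARRAY, m.group(0)))
                return
        # The "ls /raid returns nothing" case: the md device is gone.
        print("%s is not listed -- the array has disappeared?" % ARRAY)

    if __name__ == "__main__":
        check_mdstat()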

I can honestly say that the only data loss that I have had on these arrays 
came when a maintenance person completely unplugged one of the arrays from 
the UPS.  It caused low-level corruption on 5 of the 9 drives in the array.  
We ended up using a Windows 98 boot floppy with Maxtor's PowerMax utility 
to patch them all back up.  It took many hours.  This is the WORST 
possible scenario, BTW.  Even resetting the system gives the EIDE drives a 
chance to flush their caches and maintain low-level integrity.  Cutting 
the power can leave the array/drives inconsistent at the filesystem, md 
device (/dev/md0), and low-level drive-format levels.  So, lock your 
arrays in a cabinet!  8)
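
From the application side, about all you can do is push data out of the 
OS before anything risky happens.  A rough sketch (Python 3; the path is 
a placeholder, and the drive's own write cache is beyond its reach, which 
is exactly why a hard power cut is worse than a reset):

    # Force writes out of the OS before a shutdown or risky operation.
    # fsync() pushes one file's dirty data to the device; os.sync()
    # flushes everything.  Neither can empty an old EIDE drive's own
    # write cache, so this only narrows the window, it doesn't close it.
    import os

    def write_and_flush(path, data):
        """Write data and make sure it reaches the device (OS level)."""
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)      # flush this file's data and metadata
        finally:
            os.close(fd)
        os.sync()             # flush all remaining dirty buffers

    if __name__ == "__main__":
        write_and_flush("/raid/testfile", b"checkpoint data\n")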

Mike

>From: Jurgen Botz <jurgen at botz.org>
>To: mprinkey at aeolusresearch.com (Michael Prinkey)
>CC: beowulf at beowulf.org
>Subject: Re: Linux Software RAID5 Performance
>Date: Wed, 03 Apr 2002 10:25:31 -0800
>
>Michael Prinkey wrote:
> > Again, performance (see below) is remarkably good, especially
> > considering all of the strikes against this configuration:  EIDE
> > instead of SCSI, UDMA66 instead of 100/133, 5400-RPM instead of
> > 7200-RPM, and master/slave drives on each port instead of a single
> > drive per port.
>
>With regard to the master/slave config... I note that your performance
>test is a single reader/writer... in this config with RAID5 I would
>expect the performance to be quite good even with 2 drives per IDE
>controller.  But if you have several processes doing disk I/O
>simultaneously you should see a rather more precipitous drop in
>performance than you would with a single drive per IDE controller.
>I'm working on testing a very similar config right now and that's
>one of my findings (which I had expected) but our application for this
>is not very performance sensitive so it's not a big deal.
>
>A more important issue for me is reliability, and I'm somewhat
>concerned about failure modes.  For example, can an IDE drive fail
>in such a way that it will disable the controller or the other
>drive on the same controller?  If so, that would seriously limit
>the usefulness of RAID5 in this config.  In general how good is
>Linux software RAID's failure handling?  Etc.
>
>:j
>
>
>--
>Jürgen Botz                       | While differing widely in the various
>jurgen at botz.org                   | little bits we know, in our infinite
>                                   | ignorance we are all equal. -Karl Popper
>
>
>_______________________________________________
>Beowulf mailing list, Beowulf at beowulf.org
>To change your subscription (digest mode or unsubscribe) visit 
>http://www.beowulf.org/mailman/listinfo/beowulf
>

