[Beowulf] Re: real hard drive failures

Alvin Oga alvin at Mail.Linux-Consulting.com
Sun Jan 30 19:18:03 PST 2005


On Sun, 30 Jan 2005, Maurice Hilarius wrote:

> Some observations:

yup  .. and raid is fun and easy .. and i agree with all of it
as long as the assumptions are the same, which they are ..

> >Date: Tue, 25 Jan 2005 13:42:05 -0800 (PST)
> >From: Alvin Oga <alvin at Mail.Linux-Consulting.com>
> >
> >i'd add 1 or 2 cooling fans per ide disk, esp if its 7200rpm or 10,000 rpm
> >disks 
> >  
> >
> Adding fans makes some assumptions:
> 1) There is inadequate chassis cooling in the first place. If that is 
> the case, one should consider a better chassis.

there are very few "better" chassis than the ones we use

<another view>
and the other point of view is that "fans" are the insurance policy that
the disks will last longer than they would without the fans
</view>

> If the drives are not being cooled, then what else is also not properly 
> cooled?

obviously the cpu and the ambient air, etc. etc.
	- most 1U, 2U, midtower, full tower chassis fail our tests
 
> 2) To add a fan effectively, one must have sufficient input of outside 
> air, and sufficient exhaust capacity in the chassis to move out the 
> heated air. In my experience the biggest deficiency in most chassis is 
> in the latter example.

exactly ... and preferably cooler air ...

some of our customers have a closed system and the ambient temp
is 150F .. why they designed silly video systems like that is a
little whacky ...

> Simply adding fans on the front input side, without sufficient exhaust 
> capacity adds little real air flow.

there must be more exhaust holes than intake holes

- the chassis should be COLD(cool) to the touch

> Think of most chassis as a funnel. You can only push in as much air as 
> there is capacity for it to escape at the back.

a funnel with blockages and bends and 90 degree changes in direction

> More fans do not add much more flow, 

depends on your chassis design, the fans, the intake and exhaust,
and the position of the fans ... blah .. blah

> 3) Adding fans requires some place to mount them so that the airflow 
> passes over the hard disks.
> Most chassis used in clusters do not provide that space and location.

and looking at how people mount hard disks ... they probably don't
care about the life of the data on the disks ... so we avoid those
vendors/cases/chassis ...
 
> 4) Adding fans often creates some additional maintenance issues and 
> failure points.

fans failing is cheap compared to disks dying

and even better, buy better fans .... we don't see as many fan failures
compared to the average bear ( the machines at colo's all have 50% or 80%
fan failures ... usually the stuff we don't use .. good to see
reinforcement that we won't use those cheap fans )
 
> Maxtor drives have had very high failure rates in recent (3) years. That 
> probably prompted them to lead the rush to 1 year warranties 2.5 years 
> ago.

yup

> WD did very well in the market by keeping the 3 year Special 
> Edition drives available, and recently Seagate, then Maxtor came back to 
> add longer warranties, now generally 5 years.

competition is good ... if only people looked at the warranty period before
buying

> What is telling is that their product does not seem to have been 
> improved in design reliability. This is ALL about marketing.

marketing, or gambling that the disk will outlive the "warranty" period
and/or that the cost of replacing the warranty disks that die will be
cheaper than the loss of market share

> What is also worth considering is the question of will the company will 
> be around in 5 years to honor that warranty. With Seagate and Maxtor on 
> a diet of steady losses for at least 3 years it is worth considering. 
> WD, OTOH, have been making profit while selling 3-5 year warranty drives.

"spinning didks" is a dead market ... 
	- remember, ibm sold off that entire division to hitachi
	so their days are number ... which also obvious from watching
	how cheap 1GB and 2GB compact flash is  and it's just a matter
	of time before it's 100GB CFs but is it fast enuff .. 
	1GB/sec sustained data transfer

> Average. WD are slightly more reliable in our experience ( we sell 
> several thousand drives a year).
>  As long as you stick to JB, JD, or SD models.

the 8MB buffer versions ... the WD w/ 2MB buffer versions made us buy
seagate/maxtor/quantum instead, and it's a good thing we gave the 8MB
buffers a try :-)

> Hitachi and Seagate tie for 2nd, Maxtor are last.
> BTW, Hitachi took over the IBM drive business, but most of the product 
> line is new, so these are not the same as the older infamous "deathstar" 
> drives.

:-)
 
> >== using 4 drive raid is better ... but is NOT the solution ==
> >
> >	- configuring raid is NOT cheap ...
> >  
> >
> Why?

takes people time to properly config raid ...
	- most raids that are self-built are misconfigured

	( these are hands-off tests, other than pulling the disks )

	- i expect raid to be able to boot with any disk pulled out
	- i expect raid to resync automagically ( see the sketch below )
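something like this is the kind of hands-off check i mean .. just a sketch,
assumes linux software raid (md) and that /proc/mdstat is there, array names
are whatever the kernel reports:

#!/usr/bin/env python
# minimal sketch of a hands-off md health check: parse /proc/mdstat and
# flag arrays that are degraded or still resyncing
import re

def check_mdstat(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    # each array stanza starts with e.g. "md0 : active raid5 sdb1[0] ..."
    for name, body in re.findall(r"^(md\d+) : (.*?)(?=^md\d+ : |\Z)",
                                 text, re.M | re.S):
        # the [UUUU] / [_UUU] bracket shows member state; "_" means a dead slot
        degraded = "_" in "".join(re.findall(r"\[([U_]+)\]", body))
        resyncing = "recovery" in body or "resync" in body
        print("%s: degraded=%s resyncing=%s" % (name, degraded, resyncing))

if __name__ == "__main__":
    check_mdstat()

run it after every "pull a disk" test .. if the array doesn't come back and
start resyncing on its own, the raid was misconfigured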

> Most modern boards support 4 IDE devices and 4 S-ATA devices.

i haven't found a motherboard raid system that works right ..
so we use sw raid instead for those that don't want to use
ide-raid cards ( more $$$ )

> Using mdadm to configure and maintain a RAID is trivial.

yes it is trivial .. to build and configure and set up ..
	( a rough build and burn-in sketch follows below )

	- it just takes time to test, and if something went bonkers
	you have to rebuild and re-test before shipping

	- 1 day of testing is worthless ... 7 days or a month of burn-in
	testing is a good thing to make sure they don't lose
	2TB of data 

	( and always have 2 or 3 independently backed-up copies of the data )
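a rough sketch of "build it, then burn it in" .. the mdadm create line is
real syntax, but /dev/md0 and the /dev/sd[b-e]1 partitions are made-up
device names and the hourly poll over 7 days is just my assumption:

#!/usr/bin/env python
# sketch: build a 4-disk raid5 with mdadm, then burn in by polling the
# array state for a week; adjust device names for the real hardware
import subprocess, time

DISKS = ["/dev/sdb1", "/dev/sdc1", "/dev/sdd1", "/dev/sde1"]

def create_array():
    subprocess.check_call(["mdadm", "--create", "/dev/md0",
                           "--level=5", "--raid-devices=4"] + DISKS)

def burn_in(days=7, interval=3600):
    # poll once an hour; any state other than clean/active is worth a
    # look before the box ships
    for _ in range(int(days * 24 * 3600 / interval)):
        out = subprocess.check_output(["mdadm", "--detail", "/dev/md0"])
        state = [l.strip() for l in out.decode().splitlines() if "State :" in l]
        print(time.ctime(), state)
        time.sleep(interval)

if __name__ == "__main__":
    create_array()
    burn_in()

the build step is the one-liner .. the week of watching it is where the
people time goes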

> Onboard "RAID" on integrated controllers is not standardized, and is 
> usually limited to RAID 0 and 1, whereas software RAID allows RAID 5, 6, 
> and mixed RAID types on the same disks.

yup

> I disagree. You have no downtime on a RAID if you incorporate a 
> redundant RAID scheme. If the interface supports swapping out disks you 
> need never shut down to deal with a failed disk.

and that the drives are hot-swappable 

if it's not hot-swap, you will have to shut down to replace the dead disk
( the mdadm swap sequence is sketched below )

and if downtime is important, they will have 2 or 3 systems up
24x7x365 with redundant data sync'd and saved in NY,LA,Miami ..
or wherever
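
fwiw the no-shutdown swap is just three mdadm steps, assuming the
controller and trays really do hot-swap .. device names here are
hypothetical:

#!/usr/bin/env python
# sketch of a hot-swap disk replacement on linux md, no shutdown needed
# (assumes the chassis/controller actually support hot-swap)
import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.check_call(args)

def replace_disk(array="/dev/md0", dead="/dev/sdb1", spare="/dev/sdf1"):
    run("mdadm", array, "--fail", dead)      # mark the dying member failed
    run("mdadm", array, "--remove", dead)    # drop it from the array
    # ... physically swap the drive in its hot-swap tray here ...
    run("mdadm", array, "--add", spare)      # new disk; resync starts on its own

if __name__ == "__main__":
    replace_disk()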

> If you have to change drives immediately when they fail, maybe you do 
> need a better controller.

or better disks ... :-)

and disks should NOT fail before the power supply or fans ..
------------------------------------------------------------

> OTOH, shutdown time to change a disk on a decent chassis is under 1 minute.
> Depends on your needs.

and resyncing that disk takes time ... and if during the resync a 2nd disk
decides to go on vacation, you would be in a heap of trouble
( some rough numbers on that window are below )
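
back-of-the-envelope only, and the 40 MB/s sustained rebuild rate is my
assumption for a loaded box with current ide/sata disks, but it shows the
exposure window is real:

#!/usr/bin/env python
# rough estimate: how long does a 2TB array run degraded during a resync?
capacity_gb = 2000          # usable capacity being rebuilt
rebuild_mb_s = 40           # assumed sustained rebuild throughput
hours = capacity_gb * 1024 / rebuild_mb_s / 3600.0
print("resync window: about %.1f hours" % hours)
# ~14 hours of running degraded, during which a 2nd disk failure (or an
# unreadable sector on a surviving disk) loses a raid5 array outright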
 
> >	- raid will NOT prevent your downtime, as that raid box
> >	will have to be shutdown sooner or later 
> >  
> >
> Simply not true. As long as the controller supports removing and adding 
> devices, and as long as your chassis has disk trays to support hot-swap, 
> there is ZERO downtime.

as long as those "additional" constraints are met, and the other
assumptions are also intact, yeah .. zero downtime is possible

but i get the calls when those raids they bought elsewhere die,
and there's nothing i can do, since they don't have backups either

> If you have redundant RAID you can delay the shutdown until the time 
> that is convenient to you. You have to shut down for some form of 
> scheduled maintenance at least once in a while.

exactly .. redundancy ... both within the server and across multiple servers

> Price penalty is fairly light.

yes ... very inexpensive

> For example, our 1U cluster node chassis have 4 hotswap S-ATA or SCSI 
> trays, redundant disk cooling fans, and you can add a 4 port 3Ware 
> controller and you pay a price premium of only $280. Not including extra 
> disks, of course.
> What is downtime worth to you is the main question YOU have to answer..

that is the problem .. some think that all those "extra" preventive
and precautionary measures are not worth it to them .. until afterward

== summary ... 
	=
	= good fans make all the difference in the world in a good chassis
	= good disks and good vendors and good suppliers help even more
	=
	= in my book, ide disks will not die (within reason) ... and if one
	= does, usually there's a bigger problem
	=

have fun
alvin



