Killer SCSI 1 TB fileserver

Robert G. Brown rgb at phy.duke.edu
Thu Oct 25 10:37:31 PDT 2001


On Wed, 24 Oct 2001, Bill Broadley wrote:

Bill, approximately what did all of that cost?  If you wrote it below or
before, forgive me -- I missed it.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu


> 
> Heh, well at least it's fitting our needs quite well, those needs being
> something along the lines of:
> 	quick to deploy
> 	very reliable
> 	high performance (mostly linear access)
> 	very reliable
> 	less expensive than the turnkey solutions
> 	very reliable
> 
> Did I mention reliable?
> 
> We ended up with (BTW, I have no affiliation with any hardware seller):

> 
> Enclosure:
> 	Kingston data silo DS-500	
> 	Dual power supplies
> 	Excellent airflow
> 	Great cable routing
> 	Spare power supply on shelf
> 	http://www.storcase.com/dsilo/ds500.asp
> 	Very well built
> 	Hot swappable fans
> 	Good cable access
> 	Handles 4 U160 buses (we are using 3)
> 
> 4 Disk modules (1 spare):
> 	http://enlightcorp.com/data_storage/8720_drive.shtml
> 	Turns three 5.25" bays into 5 x 1" hot-swap bays
> 	Hot swap fans (4 per module)
> 	Fan monitoring
> 	Drive monitoring
> 	Temp monitoring
> 	Combined with the DS-500: >= 14 fans, all front-to-back
> 
> 16 Drives (1 spare):
> 	Seagate 1" U160 LVD (80-pin) 73 GB drives
> 	Jumpered for staggered spin-up (drives power on roughly 12 seconds apart)
> 
> 4 Kingston internal U160 cables (1 spare)
> 
> 4 Kingston external U160 cables (1 spare)
> 
> Adaptec 39160 in a 64-bit PCI slot of a Tyan Thunder motherboard.
> 
> Performance in a RAID-5 over 10 disks (86 MB/sec write, 129 MB/sec read):
> 
> Version  @version@      ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> beo           6240M           86870  77 42181  67           129498  67 526.9   4
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> beo              24   920  99 +++++ +++ 31621  96   910  99 +++++ +++  2660  99
> 
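The 86 and 129 MB/sec figures quoted above appear to be the block-I/O K/sec
columns from the table rounded down to whole MB/sec; a minimal Python sketch
of that conversion, under that assumption:

# Quick sanity check: treat the bonnie++ "--Block--" K/sec figures as KB/sec
# and divide by 1000 (an assumption; bonnie++ itself only reports K/sec).
# Both results sit well under the 160 MB/sec ceiling of a single U160 channel.
raid5_block_write_kps = 86870    # Sequential Output, --Block-- column
raid5_block_read_kps = 129498    # Sequential Input, --Block-- column

print("RAID-5 write: ~%d MB/sec" % (raid5_block_write_kps // 1000))  # ~86
print("RAID-5 read:  ~%d MB/sec" % (raid5_block_read_kps // 1000))   # ~129
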
> ===========================================================================
> RAID-0 performance: (114 MB/sec write, 130 MB/sec read)
> 
> beo,6240M,,,114721,89,56306,64,,,131574,63,534.7,2,8,7183,99,+++++,+++,+++++,++++,7393,100,+++++,+++,+++++,+++
> Version  @version@      ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> beo           6240M           114721  89 56306  64           131574  63 534.7   2
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> beo               8  7183  99 +++++ +++ +++++ +++  7393 100 +++++ +++ +++++ +++
> 
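The comma-separated line at the top of the RAID-0 results is bonnie++'s
machine-readable form of the same run.  A minimal Python sketch that pulls
the throughput fields back out of it, assuming the usual bonnie++ 1.x CSV
field order (name, size, per-char/block/rewrite output, per-char/block input,
seeks, then the file-creation fields):

# Parse one bonnie++ 1.x CSV result line (field order assumed as noted above;
# empty fields correspond to tests that were skipped in this run).
line = ("beo,6240M,,,114721,89,56306,64,,,131574,63,534.7,2,8,"
        "7183,99,+++++,+++,+++++,++++,7393,100,+++++,+++,+++++,+++")

fields = line.split(",")
machine, size = fields[0], fields[1]
block_write_kps = int(fields[4])    # sequential block output, K/sec
rewrite_kps = int(fields[6])        # rewrite, K/sec
block_read_kps = int(fields[10])    # sequential block input, K/sec
seeks_per_sec = float(fields[12])   # random seeks per second

print("%s (%s): write %d MB/s, rewrite %d MB/s, read %d MB/s, %.1f seeks/s"
      % (machine, size, block_write_kps // 1000, rewrite_kps // 1000,
         block_read_kps // 1000, seeks_per_sec))
# -> beo (6240M): write 114 MB/s, rewrite 56 MB/s, read 131 MB/s, 534.7 seeks/s

This matches the 114 MB/sec write / ~130 MB/sec read summary above (modulo
rounding).
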
> In any case we are very pleased with the result; I hope it's as stable as
> the last Linux RAID-5 fileserver I set up, which had an uptime of over 400
> days.
> 
> BTW, I did build a 3ware 6800 + 8 EIDE configuration that was a complete
> failure: numerous crashes, multiple filesystems lost, and a very
> unsatisfactory response from 3ware (i.e., "oh yeah, common problem, call
> back next month").  No surprise, I guess, that they are leaving the
> RAID+EIDE market (or so I hear, anyway).
> 
> Despite my most optimistic hopes we abandoned the EIDE RAID, and so far we
> are very happy with our 1 TB or so of disk.
> 
> Just figured I'd share with the list in case anyone else was looking
> for something similar.  Our new 24-node (48-CPU) dual Athlon + Myrinet
> cluster is doing very well; report to follow later.
> 




