[Beowulf] [Bioclusters] notes and pictures from a "wet lab baby-biocluster" project (fwd from dag at sonsorol.org)

Eugen Leitl eugen at leitl.org
Thu Mar 17 12:36:29 PST 2005


----- Forwarded message from Chris Dagdigian <dag at sonsorol.org> -----

From: Chris Dagdigian <dag at sonsorol.org>
Date: Thu, 17 Mar 2005 14:42:10 -0500
To: "Clustering,  compute farming & distributed computing in life science informatics" <bioclusters at bioinformatics.org>
Subject: [Bioclusters] notes and pictures from a "wet lab baby-biocluster"
	project
Organization: Bioteam Inc. 
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US;
	rv:1.7.5) Gecko/20041217
Reply-To: "Clustering,  compute farming & distributed computing in life science informatics" <bioclusters at bioinformatics.org>


I've had a blast the past few days doing rack-and-stack work that I 
normally don't get to do much anymore. Rough notes and a link to the 
images follow...

The pictures:
-------------

 http://bioteam.net/gallery/wetlabcluster

The challenge:
--------------

In 12 days or less, design a cluster, source the parts and put it 
together in good working order. The cluster must meet the following 
requirements:

- Capable of operating in a wet lab setting
- Managed and operated by biologists
- Linux OS required (software dependencies ...)
- Require no more than 2x 20-amp power circuits (rough budget math 
sketched after this list)
- ~ 4 terabyte raw storage; HA or super-performance is not a 
requirement
- Quieter than the instruments surrounding it
- Small enough to (roughly) fit under a lab bench
- Have sufficient CPU power to meet analytical needs
- Capable of automatically processing data coming off one or more 
high-end instruments
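
As a sanity check on the power requirement, the budget math is simple 
enough to script. A sketch in Python; every per-device wattage below is 
an assumption for illustration, not a measured draw from this rack:

    # Back-of-the-envelope check against the 2x 20-amp circuit budget.
    # All wattage figures are assumptions, not measurements.
    CIRCUITS = 2
    AMPS = 20
    VOLTS = 120
    DERATE = 0.80   # plan for 80% of breaker rating on continuous loads

    budget_watts = CIRCUITS * AMPS * VOLTS * DERATE      # 3840 W

    load_watts = (7 * 350   # 7x 1U dual-Opteron servers (~350 W each, assumed)
                  + 400     # Xserve RAID chassis (assumed)
                  + 50      # gigabit switch (assumed)
                  + 25)     # serial console server (assumed)

    print("budget %dW, est. load %dW, headroom %dW"
          % (budget_watts, load_watts, budget_watts - load_watts))

With those guesses the estimated load comes in around 2900 W against a 
derated 3840 W budget, so two 20-amp circuits looked workable.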

The components:
---------------

I can't share details about the requirements gathering phase of the 
project. We studied the instrument, the science and what needed to be 
done with the data coming off the instrument, and determined that 
approximately six dual-processor boxes with AMD Opteron CPUs would be 
acceptable. We were under a massive time crunch, and some components 
were ordered purely on the basis of "how fast can you ship to us..."
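
The "automatically process data coming off the instrument" piece really 
boils down to a watch-folder pattern: the instrument drops files into a 
shared directory and a daemon on the master node picks them up. A 
minimal sketch of that loop; all paths and the analysis command are 
placeholders, not our real pipeline:

    #!/usr/bin/env python
    # Minimal watch-folder loop: poll a drop directory for new
    # instrument output and hand each file to an analysis command.
    # Paths and the command are hypothetical.
    import os, shutil, subprocess, time

    DROP_DIR = "/raid/incoming"    # where the instrument writes data
    DONE_DIR = "/raid/processed"   # where finished inputs are archived
    os.makedirs(DONE_DIR, exist_ok=True)

    def is_stable(path, wait=5):
        """Treat a file as complete once its size stops changing."""
        size = os.path.getsize(path)
        time.sleep(wait)
        return os.path.getsize(path) == size

    while True:
        for name in sorted(os.listdir(DROP_DIR)):
            path = os.path.join(DROP_DIR, name)
            if not os.path.isfile(path) or not is_stable(path):
                continue
            # Placeholder for the real analysis step (e.g. a cluster
            # job submission); here we just run a hypothetical script.
            subprocess.run(["/usr/local/bin/analyze", path], check=True)
            shutil.move(path, os.path.join(DONE_DIR, name))
        time.sleep(30)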

The parts list boiled down to the following pieces:

From CDW.com with rush delivery :)

 - Digi CM 16 serial console server
 - Pair of 20-amp APC rack-mount power distribution units
 - Dirt cheap SMC 24-port gigabit ethernet unmanaged switch
 - Box of serial DB9 to cat5 RJ45 adaptors for serial console
 - Bulk quantities of 5ft grey cat5e cables (no time for special colors 
or lengths)

From IBM via a local reseller/integrator:
 - 7x IBM eServer 326 1U rackmount dual-Opteron servers (6 nodes + master)

From Apple:
 - Apple Xserve RAID with 14x 400GB drives (capacity math after the 
parts list)
 - Apple PCI-X Fibre Channel HBA card & cables
 - Xserve RAID spare parts kit

From Extrememac.com:

 - Small form factor 12U "XRack Pro2" cabinet (http://www.xrackpro.com/)
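
On the storage side, the Xserve RAID math works out with headroom over 
the ~4TB requirement. The chassis has two independent 7-drive 
controllers, each presenting one LUN; the RAID 5 layout below is an 
assumption about configuration, not a recommendation:

    # Capacity check for 14x 400GB drives split across two controllers.
    drives_per_side, size_gb = 7, 400
    raw_tb = 2 * drives_per_side * size_gb / 1000.0            # 5.6 TB raw
    usable_tb = 2 * (drives_per_side - 1) * size_gb / 1000.0   # RAID 5: 4.8 TB
    print("raw %.1f TB, usable %.1f TB" % (raw_tb, usable_tb))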


The problems:
-------------

The biggest overall problem was that the Apple Xserve RAID was ordered 
with FedEx shipping but without priority delivery. This meant that the 
storage arrived at 5pm the night before our final cluster-assembly work 
day. It also arrived with damaged rackmount rails, but the damage was 
not enough to make the hardware unusable.

Even worse, the cluster cabinet arrived at 1pm *on* our final work day, 
despite the fact that it had been ordered via credit card directly from 
Extrememac 7 or 8 days prior. As a vendor they were not really on the 
ball, but this could be normal for a company that seems to mostly make 
iPod accessories. Hopefully it was just a fluke experience.

The IBM hardware arrived quickly and the reseller/integrator did a good 
job. A minor hassle was that we had to order 15,000RPM Ultra320 SCSI 
drives because the cheaper 10,000RPM drives were on some sort of IBM 
global "short supply" list.

The biggest problem with IBM, and the reason I'll probably never 
purchase eServer 326 boxes again, is that IBM apparently refuses to 
sell any sort of generic rail mounting kit for the product line (this 
is what the integrator told me; I have not verified it yet). The 
servers ship with rail kits that *only* work in IBM-branded server 
cabinets. Given that we were installing into a non-IBM 12U cabinet, 
this was a big issue. Our integrator found a third party that makes 
compatible rails, but we could not order them in time. To me this is 
just annoying, and (if true) the annoyance factor means I'll probably 
buy my dual Opterons from Sun in the future (assuming Sun will sell me 
a generic rail kit...).

Final thoughts:
---------------

The 64-bit version of SuSE 9.2 Professional handled the Fibre Channel 
storage amazingly cleanly. It detected the 2 Apple RAID LUNs and 
provisioned them into an LVM volume group, and the resulting filesystem 
mounted with no problem at all. I was expecting the Linux -> Apple RAID 
interaction to be a bit scarier.
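
For the curious, the provisioning amounts to the standard LVM steps, 
scripted here for repeatability. The device names are assumptions about 
where the two FC LUNs land; verify via dmesg on your own system first:

    #!/usr/bin/env python
    # The LVM steps, scripted. /dev/sda and /dev/sdb are assumed device
    # names for the two Xserve RAID LUNs; check dmesg before running.
    import subprocess

    LUNS = ["/dev/sda", "/dev/sdb"]

    def run(*cmd):
        print("+ " + " ".join(cmd))      # echo each step for auditing
        subprocess.run(cmd, check=True)  # stop on the first failure

    run("pvcreate", *LUNS)               # label LUNs as physical volumes
    run("vgcreate", "raidvg", *LUNS)     # pool them into one volume group
    run("lvcreate", "-l", "100%FREE", "-n", "data", "raidvg")  # one big LV
    run("mkfs.ext3", "/dev/raidvg/data") # filesystem on top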

I really like the XRack Pro2 cluster cabinet, or whatever its marketing 
name is. It is well assembled, with good options for trading off quiet 
operation against cooling. There is plenty of space for wiring and 
cable runs even with all 12U packed with equipment. We have everything 
powered up and working hard today, and we are monitoring the internal 
temperature conditions.

The Xserve RAID is one of the quietest storage arrays I've ever 
encountered - I thought it would be louder than the IBM rack-mounts, 
but this is not the case.

The biggest liability in this cluster is the lack of an internal UPS 
capable of cleanly shutting down the Xserve RAID chassis; there was 
simply no more room in the cabinet. We'll use an external UPS for now, 
and if we can squeeze out 1 compute node there is the possibility of 
installing one of the 1U UPS systems made by APC.
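
apcupsd can drive this sort of shutdown natively through its event 
scripts, but for clarity here is the logic spelled out as a poller. 
This assumes apcupsd and its apcaccess client are installed; the 
unmount/poweroff sequence is illustrative, not a tested procedure:

    #!/usr/bin/env python
    # Poll the UPS via apcupsd's 'apcaccess' client and shut down
    # cleanly when we go on battery. Shutdown steps are illustrative.
    import subprocess, time

    def on_battery():
        out = subprocess.run(["apcaccess", "status"],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            if line.startswith("STATUS"):
                return "ONBATT" in line
        return False

    while True:
        if on_battery():
            subprocess.run(["umount", "/raid"])  # release the FC volume
            subprocess.run(["poweroff"])         # then take the node down
            break
        time.sleep(60)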




-Chris

-- 
Chris Dagdigian, <dag at sonsorol.org>
BioTeam  - Independent life science IT & informatics consulting
Office: 617-665-6088, Mobile: 617-877-5498, Fax: 425-699-0193
PGP KeyID: 83D4310E iChat/AIM: bioteamdag  Web: http://bioteam.net
_______________________________________________
Bioclusters maillist  -  Bioclusters at bioinformatics.org
https://bioinformatics.org/mailman/listinfo/bioclusters

----- End forwarded message -----
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net