[Beowulf] PetaBytes on a budget, take 2
Bob Drzyzgula
bob at drzyzgula.org
Sat Jul 23 06:13:50 PDT 2011
Getting back to the original question, I will say that I,
as I expect most of us did, of course considered these
back when the first version came out. However, I rejected
them based on a few specific criticisms:
1. The power supplies are not redundant.
2. The fans are not redundant.
3. The drives are inaccessible without shutting
down the system and pulling the whole chassis.
For my application (I was building a NAS device, not a
simple rsync target) I was also unhappy with the choice
of motherboard and other I/O components, but that's a YMMV
kind of thing and could easily be improved upon within
the same chassis.
FWIW, for chassis solutions that approach this level
of density, but still offer redundant power & cooling
as well as hot-swap drive access, Supermicro has
a number of designs that are probably worth considering:
http://www.supermicro.com/storage/
In the end we built a solution using the 24-drive-in-4U
SC848A chassis; we didn't go to the 36-drive boxes because
I didn't want to have to compete with the cabling on the
back side of the rack to access the drives, and anyway our
data center is cooling-constrained and thus we have rack
units to spare. We put motherboards in half of them and use
the other half in a JBOD configuration. We also used 2TB,
7200 rpm "Enterprise" SAS drives, which actually aren't
all that much more expensive. Finally, we used Adaptec
SSD-caching SAS controllers. All of this is of course
more expensive than the parts in the Backblaze design,
but that money all goes toward reliability, manageability
and performance, and it still is tremendously cheaper
than an enterprise SAN-based solution. Not to say that
enterprise SANs don't have their place -- we use them for
mission-critical production data -- but there are many
applications for which their cost simply is not justified.
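For a rough sense of scale, the capacity arithmetic for one of
those 24-bay boxes looks something like the sketch below. The
drive count and size are what we used; the RAID layout, spare
count and overhead figure are just assumptions for illustration,
not a description of our actual configuration:

    # Back-of-envelope usable capacity for a 24-bay chassis.
    # Layout assumptions (RAID groups, spares, overhead) are
    # illustrative only, not our production setup.
    DRIVES = 24
    DRIVE_TB = 2.0            # 2 TB SAS drives
    SPARES = 2                # assumed hot spares
    GROUPS, PARITY = 2, 2     # assumed: two RAID6 groups, 2 parity each
    FS_OVERHEAD = 0.05        # assumed ~5% filesystem overhead

    data_drives = DRIVES - SPARES - GROUPS * PARITY
    usable_tb = data_drives * DRIVE_TB * (1.0 - FS_OVERHEAD)
    print("raw %.0f TB, usable ~%.1f TB" % (DRIVES * DRIVE_TB, usable_tb))

Swap in bigger drives or a different parity scheme and the numbers
move around, but the point stands: even a chassis with full
redundancy gets you a respectable amount of usable space per rack unit.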
On 21/07/11 12:28 -0400, Ellis H. Wilson III wrote:
>
> I have doubts about the manageability of such large data without complex
> software sitting above the spinning rust to enable scalability of
> performance and recovery of drive failures, which are inevitable at this
> scale.
Well, yes, from a software perspective this is true, and
that's of course where most of the rest of this thread
headed, which I did find interesting and useful. But if
one assumes appropriate software layers, I think that
this remains an interesting hardware design question.
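Just to put a number on the inevitability Ellis mentions: expected
failures scale linearly with the drive population, so at pod scale
replacing drives becomes routine. The annualized failure rate below
is an assumed figure, purely for illustration:

    # Expected annual drive failures for a given drive population.
    # The 4% AFR is an assumed, illustrative number; plug in your own.
    afr = 0.04
    for drives in (45, 450, 4500):   # roughly one, ten, a hundred pods
        print("%5d drives -> ~%.1f expected failures/year"
              % (drives, drives * afr))

Whatever the real AFR turns out to be, the software sitting above
the drives has to treat failure as a steady-state condition rather
than an exception.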
> I mean, what is the actual value of this article? They really don't
> tell you "how" to build reliable storage at that scale, just a
> hand-waving description on how some of the items fit in the box and a
> few file-system specifics. The SATA wiring diagram is probably the most
> detailed thing in the post and even that leaves a lot of questions to be
> answered.
Actually I'm not sure you read the whole blog post. They
give extensive wiring diagrams for all of it, including
detailed documentation of the custom harness for the power
supplies. They also give a complete parts list -- down to
the last screw -- and links to suppliers for unusual or
custom parts as well as full CAD drawings of the chassis,
in SolidWorks (a free viewer is available). Not quite sure
what else you'd be looking for -- at least from a hardware
perspective.
I do think that this is an interesting exercise in finding
exactly how little hardware you can wrap around some hard
drives and still have a functional storage system. And
as Backblaze seems to have built a going concern on top of
the design, it does seem to have its applications. However,
I think one has to recognize its limitations and be very
careful to not try to push it into applications where the
lack of redundancy and manageability are going to come up
and bite you on the behind.
--Bob