[Beowulf] [zfs-discuss] Petabyte pool?

Eugen Leitl eugen at leitl.org
Mon Mar 18 07:56:35 PDT 2013


----- Forwarded message from Marion Hakanson <hakansom at ohsu.edu> -----

From: Marion Hakanson <hakansom at ohsu.edu>
Date: Fri, 15 Mar 2013 18:09:34 -0700
To: zfs at lists.illumos.org
Cc: zfs-discuss at opensolaris.org
Subject: [zfs-discuss] Petabyte pool?
X-Mailer: exmh version 2.7.2 01/07/2005 with nmh-1.3

Greetings,

Has anyone out there built a 1-petabyte pool?  I've been asked to look
into this, and was told "low performance" is fine; the workload is likely
to be write-once, read-occasionally archive storage of gene-sequencing
data.  A single 10Gbit NIC is probably sufficient for connectivity.
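
As a sanity check on that single NIC, here's a quick back-of-envelope
sketch (the ~1.25 GB/s figure assumes near line rate, which is optimistic):

  # Can one 10GbE port keep up with a ~1 PB archive workload?
  GBIT = 10                                  # link speed, Gbit/s
  bytes_per_sec = GBIT / 8.0 * 1e9           # ~1.25e9 B/s at line rate
  tb_per_day = bytes_per_sec * 86400 / 1e12  # ~108 TB/day
  days_to_fill_pb = 1000.0 / tb_per_day      # ~9.3 days to write 1 PB
  print("~%.0f TB/day, ~%.1f days to fill 1 PB"
        % (tb_per_day, days_to_fill_pb))

Even at half that rate it would take under three weeks to fill the pool
once, so one 10GbE port seems a reasonable fit for this workload.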

We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
Back-of-the-envelope math suggests stacking up eight to ten of those,
depending on whether you want a "raw marketing petabyte" or a proper
"power-of-two usable petabyte".
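
To make that envelope explicit, here is a rough sketch; the 4x 11-wide
raidz3 layout per chassis (8 data + 3 parity per vdev, one spare) is my
assumption, and it ignores ZFS metadata and raidz padding overhead:

  # Usable capacity of N chassis of 45x 4TB drives, assuming
  # 4x 11-wide raidz3 vdevs (8 data + 3 parity) plus a spare per box.
  TB = 1000.0 ** 4                 # marketing terabyte
  PIB = 2.0 ** 50                  # power-of-two petabyte (pebibyte)
  usable_per_box = 4 * 8 * 4 * TB  # 4 vdevs x 8 data drives x 4TB = 128 TB
  for boxes in range(8, 11):
      usable = boxes * usable_per_box
      print("%2d chassis: %4.2f PB marketing, %4.2f PiB usable"
            % (boxes, usable / 1000.0 ** 5, usable / PIB))

Eight chassis clear the marketing petabyte (~1.02 PB); it takes nine or
ten to clear a power-of-two petabyte (~1.02-1.14 PiB) before overhead.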

I get a little nervous at the thought of hooking all that up to a single
server, and am a little vague on how much RAM would be advisable, other
than "as much as will fit" (:-).  Then again, I've been waiting for
something like pNFS/NFSv4.1 to be usable for gluing together multiple
NFS servers into a single global namespace, without any sign of that
happening anytime soon.
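
On RAM: the oft-quoted (and much-debated) list rule of thumb is ~1 GB of
RAM per 1 TB of pool, ballooning to ~5 GB per TB of deduped data.  Take
this sketch as a ceiling, not a requirement; a write-once, read-rarely
archive with dedup off should get by on far less:

  # Rule-of-thumb ARC sizing -- list folklore, not a hard requirement.
  pool_tb = 1000                 # ~1 PB of pool
  arc_gb = pool_tb * 1           # ~1 GB RAM per TB of pool -> ~1 TB RAM
  dedup_gb = pool_tb * 5         # ~5 GB per TB if dedup were on (avoid it)
  print("rule-of-thumb ARC: ~%d GB; with dedup: ~%d GB"
        % (arc_gb, dedup_gb))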

So, has anyone done this?  Or come close to it?  Thoughts, even if you
haven't done it yourself?

Thanks and regards,

Marion



----- End forwarded message -----
-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


