[Beowulf] [zfs] Re: [zfs-discuss] Petabyte pool?
Eugen Leitl
eugen at leitl.org
Mon Mar 18 07:57:39 PDT 2013
----- Forwarded message from Ray Van Dolson <rvandolson at esri.com> -----
From: Ray Van Dolson <rvandolson at esri.com>
Date: Fri, 15 Mar 2013 18:17:46 -0700
To: Marion Hakanson <hakansom at ohsu.edu>
Cc: zfs at lists.illumos.org, zfs-discuss at opensolaris.org
Subject: [zfs] Re: [zfs-discuss] Petabyte pool?
User-Agent: Mutt/1.5.21 (2010-09-15)
Reply-To: zfs at lists.illumos.org
On Fri, Mar 15, 2013 at 06:09:34PM -0700, Marion Hakanson wrote:
> Greetings,
>
> Has anyone out there built a 1-petabyte pool? I've been asked to look
> into this, and was told "low performance" is fine, workload is likely
> to be write-once, read-occasionally, archive storage of gene sequencing
> data. Probably a single 10Gbit NIC for connectivity is sufficient.
>
> We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
> using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
> Back-of-the-envelope might suggest stacking up eight to ten of those,
> depending on whether you want a "raw marketing petabyte", or a proper "power-of-two
> usable petabyte".
>
> I get a little nervous at the thought of hooking all that up to a single
> server, and am a little vague on how much RAM would be advisable, other
> than "as much as will fit" (:-). Then again, I've been waiting for
> something like pNFS/NFSv4.1 to be usable for gluing together multiple
> NFS servers into a single global namespace, without any sign of that
> happening anytime soon.
>
> So, has anyone done this? Or come close to it? Thoughts, even if you
> haven't done it yourself?
>
> Thanks and regards,
>
> Marion
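(A rough sketch of the back-of-the-envelope math above, assuming a purely
hypothetical layout of four 11-disk raidz3 vdevs per 45-slot chassis, 4TB
drives, one slot left for a hot spare, and ignoring ZFS metadata overhead:

  $ echo $(( 4 * (11 - 3) * 4 ))     # usable TB per chassis
  128
  $ echo $(( (1000 + 127) / 128 ))   # chassis for a 1000TB "marketing" petabyte
  8
  $ echo $(( (1126 + 127) / 128 ))   # chassis for a power-of-two petabyte, ~1126TB
  9

which is consistent with the eight-to-ten estimate.)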
We've come close:
admin@mes-str-imgnx-p1:~$ zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
datapool   978T   298T   680T    30%  1.00x  ONLINE  -
syspool    278G   104G   174G    37%  1.00x  ONLINE  -
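One note on reading those numbers: for raidz vdevs, "zpool list" reports raw
capacity with parity space included, so usable space on datapool is somewhat
below the 978T shown. The net-of-parity figure is what "zfs list" reports:

  $ zfs list datapool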
Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual-pathed
to a couple of LSI SAS switches.
Using Nexenta but no reason you couldn't do this w/ $whatever.
We did triple parity and our vdev membership is set up such that we can
lose up to three JBODs and still be functional (one vdev member disk
per JBOD).
This is with 3TB NL-SAS drives.
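As an illustration of that membership scheme (a sketch only: device names are
invented, the shelf count is hypothetical, and a real build would use
persistent multipath /dev/disk/by-id names), each 12-wide raidz3 vdev draws
exactly one drive from each of twelve MD1200 shelves, so losing any three
shelves costs each vdev at most three disks, within raidz3's parity budget:

  zpool create datapool \
    raidz3 j01d0 j02d0 j03d0 j04d0 j05d0 j06d0 \
           j07d0 j08d0 j09d0 j10d0 j11d0 j12d0 \
    raidz3 j01d1 j02d1 j03d1 j04d1 j05d1 j06d1 \
           j07d1 j08d1 j09d1 j10d1 j11d1 j12d1
    # ...and so on, one 12-wide raidz3 vdev per drive slot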
Ray
----- End forwarded message -----
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE