[Beowulf] [zfs] Re: [zfs-discuss] Petabyte pool?
Jeff White
jaw171 at pitt.edu
Tue Mar 19 05:47:44 PDT 2013
On 03/18/2013 02:22 PM, Hearns, John wrote:
> We've come close:
>
> admin at mes-str-imgnx-p1:~$ zpool list
> NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> datapool   978T   298T   680T  30%  1.00x  ONLINE  -
> syspool    278G   104G   174G  37%  1.00x  ONLINE  -
>
> Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual
> pathed to a couple of LSI SAS switches.
>
> Using Nexenta but no reason you couldn't do this w/ $whatever.
>
> We did triple parity and our vdev membership is set up such that we can
> lose up to three JBODs and still be functional (one vdev member disk
> per JBOD).
>
> This is with 3TB NL-SAS drives.
>
>
> That's very interesting.
> My knee-jerk reaction is always to say 'data does not exist unless you have two copies of it' -
> i.e., you should always make sure there are two copies of the data on separate media.
>
> In this setup, though, it looks like you can achieve substantially that result without mirroring between
> two completely separate ZFS servers.
> Being able to lose three JBODs without losing data is interesting - can we find out more about this setup?
>
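To make the vdev layout described above concrete: with exactly one member disk per JBOD in each raidz3 vdev, losing three whole enclosures costs every vdev only three disks, which is precisely raidz3's parity budget, so the pool stays online (degraded). A minimal sketch, assuming six JBODs and hypothetical device names (a real build this size would use far more disks, and multipath WWNs rather than these made-up names):

  # Hypothetical naming: jbodN-dM means disk M in enclosure N
  zpool create datapool \
    raidz3 jbod1-d0 jbod2-d0 jbod3-d0 jbod4-d0 jbod5-d0 jbod6-d0 \
    raidz3 jbod1-d1 jbod2-d1 jbod3-d1 jbod4-d1 jbod5-d1 jbod6-d1

Pull any three JBODs and each vdev is down exactly three disks with no data loss; a fourth enclosure failure would take the pool with it.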
At my site we tried using GlusterFS to glue together similar Supermicro
boxes full of drives. The design never made it to production because I
was able to make it die or become split-brained in the dev environment.
You may have better luck than I did if you want to try it; that's just
my $0.02 on a piece of software that purports to do what you're looking
for. It does have a feature that mirrors between two servers, as you
mentioned, though I doubt it runs on Solaris (I used RHEL 6 with XFS).
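For what it's worth, the mirroring feature is GlusterFS's replicated
volume type. A minimal sketch, assuming two hypothetical hosts each
exporting one brick directory:

  gluster volume create datavol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
  gluster volume start datavol

Every file is then written to both servers, which gets you the "two
copies on separate media" property John described - though the
split-brain handling when the replicas disagree is exactly what bit us
in testing.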