[Beowulf] Suggestions to what DFS to use
Tony Brian Albers
tba at kb.dk
Tue Feb 14 05:16:38 PST 2017
On 2017-02-14 11:44, Jörg Saßmannshausen wrote:
> Hi John,
>
> thanks for the very interesting and informative post.
> I am looking into large storage space right now as well, so this is really
> timely for me! :-)
>
> One question: I noticed you are using ZFS on Linux (CentOS 6.8). What are
> your experiences with this? Does it work reliably? How did you configure the
> file space?
> From what I have read, the best way of setting up ZFS is to give ZFS direct
> access to the discs and then build the ZFS 'raid5' or 'raid6' (raidz/raidz2)
> on top of that. Is that what you do as well?
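A whole-disk raidz2 layout of the kind Jörg describes would look roughly like
this with ZFS on Linux; the pool name, dataset name and by-id device names
below are placeholders, not anything from John's actual setup:

  # give ZFS the raw disks (by-id names survive device reordering) and let it
  # build the raidz2 vdev itself; ashift=12 suits 4K-sector drives
  zpool create -o ashift=12 tank raidz2 \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
      /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
  # raidz2 tolerates two failed disks per vdev, roughly comparable to RAID-6
  zfs create -o compression=lz4 tank/data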
>
> You can contact me offline if you like.
>
> All the best from London
>
> Jörg
>
> On Tuesday 14 Feb 2017 10:31:00 John Hanks wrote:
>> I can't compare it to Lustre currently, but in the theme of general, we
>> have 4 major chunks of storage:
>>
>> 1. (~500 TB) DDN SFA12K running GRIDScaler (GPFS) but without GPFS clients
>> on the nodes; this is presented to the cluster through cNFS.
>>
>> 2. (~250 TB) SuperMicro 72-bay server. Running CentOS 6.8, ZFS presented
>> via NFS
>>
>> 3. (~460 TB) SuperMicro 90-bay JBOD fronted by a SuperMicro 2U server
>> with 2 x LSI 3008 SAS/SATA cards. Running CentOS 7.2, ZFS and BeeGFS
>> 2015.xx. BeeGFS clients on all nodes.
>>
>> 4. (~12 TB) SuperMicro 48-bay NVMe server, running CentOS 7.2, ZFS
>> presented via NFS
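The "ZFS presented via NFS" pieces (2 and 4 above) can be set up without
editing /etc/exports by hand, using the sharenfs dataset property; a minimal
sketch, where the pool/dataset name, subnet and hostname are placeholders and
the exact option syntax accepted by sharenfs varies a little between
ZFS-on-Linux releases:

  # on the storage server, with the kernel NFS server packages installed
  zfs set sharenfs="rw=@10.10.0.0/16,no_root_squash" tank/export
  zfs share tank/export

  # on a compute node
  mount -t nfs storage01:/tank/export /mnt/export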
>>
>> Depending on your benchmark, 1, 2 or 3 may be faster. GPFS falls over
>> wheezing under load. ZFS/NFS single server falls over wheezing under
>> slightly less load. BeeGFS tends to fall over a bit more gracefully under
>> load. Number 4, the NVMe server, doesn't care what you do; your load doesn't
>> impress it at all, bring more.
>>
>> We move workloads around to whichever storage has free space and works best
>> and put anything metadata or random I/O-ish that will fit onto the NVMe
>> based storage.
>>
>> Now, in the theme of specific, why are we using BeeGFS and why are we
>> currently planning to buy about 4 PB of SuperMicro to put behind it? When
>> we asked about improving the performance of the DDN, one recommendation was
>> to buy GPFS client licenses for all our nodes. The quoted price was about
>> 100k more than we wound up spending on the 460 additional TB of SuperMicro
>> storage and BeeGFS, which performs as well or better. I fail to see the
>> inherent value of DDN/GPFS that makes it worth that much of a premium in
>> our environment. My personal opinion is that I'll take hardware over
>> licenses any day of the week. My general grumpiness towards vendors isn't
>> improved by the DDN looking suspiciously like a SuperMicro system when I
>> pull the shiny cover off. Of course, YMMV certainly applies here. But
>> there's also that incident where we had to do an offline fsck to clean up
>> some corrupted GPFS foo and the mmfsck tool had an assertion error, not a
>> warm fuzzy moment...
>>
>> Last example, we recently stood up a small test cluster built out of
>> workstations and threw some old 2TB drives in every available slot, then
>> used BeeGFS to glue them all together. Suddenly there is a 36 TB filesystem
>> where before there was just old hardware. And as a bonus, it'll do
>> sustained 2 GB/s for streaming large writes. It's worth a look.
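Gluing a pile of local disks together like that follows the standard BeeGFS
quick-start flow; a rough sketch, assuming the beegfs-setup-* helpers that ship
with the 2015 series, where hostnames, paths and numeric IDs are placeholders:

  # one management service and one (or more) metadata service
  /opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd
  /opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 1 -m mgmt01
  # one storage target per local disk, repeated on every workstation
  /opt/beegfs/sbin/beegfs-setup-storage -p /mnt/disk1 -s 2 -i 201 -m mgmt01
  # client on every node; the filesystem mounts at /mnt/beegfs by default
  /opt/beegfs/sbin/beegfs-setup-client -m mgmt01

Once the beegfs-mgmtd/meta/storage/helperd/client services are started, all the
targets appear as a single striped namespace.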
>>
>> jbh
That sounds very interesting; I'd like to hear more about that. How did
you manage to use ZFS on CentOS?
/tony
--
Best regards,
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316