[Beowulf] Considering BeeGFS for parallel file system

Prentice Bisbal pbisbal at pppl.gov
Mon Mar 18 09:02:03 PDT 2019


Will,

Several years ago, when I was at Rutgers, Joe Landman's company, Scalable 
Informatics (RIP), was trying to sell me on BeeGFS over Lustre and GPFS. 
At the time, I was not interested. Why not? BeeGFS was still relatively 
new, while Lustre and GPFS had larger install bases, and therefore longer 
track records. I was the only system admin in a group with aspirations 
to be the one-stop shop for a very large research institution, and to 
become a national-level HPC center. As a result, I was more risk-averse 
than I normally would be. I didn't want to take a risk on a relatively 
unproven system, no matter how good its performance was. I also wanted 
to use a system with an abundance of other sys admins whose expertise I 
could lean on if I needed to.

Fast forward 4-5 years, and the situation has completely changed. At 
SC18, it seemed every booth was using or promoting BeeGFS, and everyone 
was saying good things about it. If I were in the same situation today, 
I wouldn't hesitate to consider BeeGFS.

In fact, I feel bad for not giving it a closer look at the time, because 
it's clear Joe and his team at his late company were on to something and 
were ahead of their time in promoting BeeGFS.

--
Prentice


On 3/18/19 11:50 AM, Will Dennis wrote:
>
> Hi all,
>
> I am considering using BeeGFS as a parallel file system for one (and 
> if successful, more) of our clusters here. I just wanted to get folks’ 
> opinions on that, and whether there are any “gotchas” or better-fit 
> solutions out there... The first cluster I am considering it for 
> currently has ~50TB of storage on a single ZFS server serving the data 
> over NFS; I'm looking to increase not only storage capacity, but also 
> I/O speed. The cluster nodes that are consuming the storage have 
> 10GBase-T interconnects, as does the ZFS server. As we are a smaller 
> shop, we want to keep the solution simple. BeeGFS was recommended to 
> me as a good solution on another list, and I wanted to get people’s 
> opinions on this list.
>
> Thanks!
>
> Will
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowul
