[Beowulf] Lustre Upgrades
James Burton
jburto2 at g.clemson.edu
Tue Jul 24 19:19:43 PDT 2018
Does anyone have any experience with how BeeGFS compares to Lustre? We're
looking at both for our next-generation HPC storage system.
Is CephFS a valid option for HPC now? Last time I played with CephFS it
wasn't ready for prime time, but that was a few years ago.
On Tue, Jul 24, 2018 at 10:58 AM, Joe Landman <joe.landman at gmail.com> wrote:
>
>
> On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
>
>> Forgive me for saying this, but the philosophy for software-defined
>> storage such as Ceph and Gluster is that forklift-style upgrades should not
>> be necessary.
>> When a storage server is to be retired, the data is copied onto the new
>> server and the old one is taken out of service. Well, copied is not the
>> correct word, as there are erasure-coded copies of the data. Rebalanced is
>> probably a better word.
>>
>>
>
> This ^^
>
> I'd seen/helped build/benchmarked some very nice/fast CephFS-based storage
> systems in $dayjob-1. While it is a neat system, if you are focused on
> availability, scalability, and performance, it's pretty hard to beat
> BeeGFS. We'd ($dayjob-1) deployed several very large/fast file systems
> with it on our spinning rust, SSD, and NVMe units.
>
>
> --
> Joe Landman
> e: joe.landman at gmail.com
> t: @hpcjoe
> w: https://scalability.org
> g: https://github.com/joelandman
> l: https://www.linkedin.com/in/joelandman
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>
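
As an aside, the "rebalance rather than forklift" retirement John describes
above looks roughly like the sketch below on a Ceph cluster. This is only an
illustration, not a recipe: it assumes a Luminous-or-later release, an admin
keyring on the node running it, and made-up OSD ids for the server being
retired; check your release's docs for the exact commands.

#!/usr/bin/env python3
# Sketch of retiring a Ceph storage server by letting the cluster
# rebalance, rather than copying its data anywhere by hand.
import subprocess
import time

OSDS_ON_RETIRING_HOST = ["12", "13", "14", "15"]   # hypothetical OSD ids
                                                   # (syntax "12" vs "osd.12"
                                                   # varies by release)

def ceph(*args, check=True):
    """Run one 'ceph' CLI command and return the CompletedProcess."""
    return subprocess.run(["ceph", *args], check=check,
                          capture_output=True, text=True)

# 1. Mark each OSD "out": CRUSH re-maps its placement groups and the
#    cluster backfills replicas / erasure-code shards onto the remaining
#    servers. Nothing is copied wholesale off the old box.
for osd in OSDS_ON_RETIRING_HOST:
    ceph("osd", "out", osd)

# 2. Wait for the rebalance to finish; 'safe-to-destroy' exits non-zero
#    while any placement group still depends on these OSDs.
while any(ceph("osd", "safe-to-destroy", osd, check=False).returncode != 0
          for osd in OSDS_ON_RETIRING_HOST):
    time.sleep(60)

# 3. Remove the now-empty OSDs; the old server can then be powered off.
for osd in OSDS_ON_RETIRING_HOST:
    ceph("osd", "purge", osd, "--yes-i-really-mean-it")
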
--
James Burton
OS and Storage Architect
Advanced Computing Infrastructure
Clemson University Computing and Information Technology
340 Computer Court
Anderson, SC 29625
(864) 656-9047