<div dir="ltr"><span style="font-size:small;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Does anyone have any experience with how BeeGFS compares to Lustre? We're looking at both of those for our next generation HPC storage system. <br><br>Is CephFS a valid option for HPC now? Last time I played with CephFS it wasn't ready for prime time, but that was a few years ago.</span><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 24, 2018 at 10:58 AM, Joe Landman <span dir="ltr"><<a href="mailto:joe.landman@gmail.com" target="_blank">joe.landman@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
>> Forgive me for saying this, but the philosophy for software-defined storage such as Ceph and Gluster is that forklift-style upgrades should not be necessary.
>> When a storage server is to be retired, the data is copied onto the new server and then the old one is taken out of service. Well, "copied" is not the correct word, as there are erasure-coded copies of the data; "rebalanced" is probably a better word.
>
> This ^^
>
> I'd seen/helped build/benchmarked some very nice/fast CephFS-based storage systems in $dayjob-1. While it is a neat system, if you are focused on availability, scalability, and performance, it's pretty hard to beat BeeGFS. We'd ($dayjob-1) deployed several very large/fast file systems with it on our spinning-rust, SSD, and NVMe units.
>
> --
> Joe Landman
> e: joe.landman@gmail.com
> t: @hpcjoe
> w: https://scalability.org
> g: https://github.com/joelandman
> l: https://www.linkedin.com/in/joelandman
>
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
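
To make John's point about rebalancing concrete: on the Ceph side, retiring a storage node is essentially a matter of marking its OSDs "out" and letting CRUSH backfill the data onto the remaining servers, then removing the emptied OSDs. A rough sketch for a single OSD, from memory (the id 12 is just a placeholder, and the exact steps vary a bit by release):

    ceph osd out 12                            # 12 = placeholder OSD id; stop mapping data to it, backfill begins
    ceph -s                                    # watch recovery/backfill progress
    ceph osd safe-to-destroy 12                # succeeds once no PGs depend on that OSD any more
    ceph osd purge 12 --yes-i-really-mean-it   # remove it from the OSD and CRUSH maps

Gluster has an analogous flow with "gluster volume remove-brick ... start/status/commit", which migrates data off a brick before it is dropped. Either way the new hardware joins the pool and the old hardware drains out; there is no forklift copy step.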

--
James Burton
OS and Storage Architect
Advanced Computing Infrastructure
Clemson University Computing and Information Technology
340 Computer Court
Anderson, SC 29625
(864) 656-9047