<div dir="ltr"><div>Joe, sorry to split the thread here. I like BeeGFS and have set it up.</div><div>I have worked for two companies now who have sites around the world, those sites being independent research units. But HPC facilities are in headquarters.</div><div>The sites want to be able to drop files onto local storage yet have it magically appear on HPC storage, and same with the results going back the other way.</div><div><br></div><div>One company did this well with GPFS and AFM volumes.</div><div>For the current company, I looked at gluster and Gluster geo-replication is one way only.</div><div>What do you know of the BeeGFS mirroring? Will it work over long distances? (Note to me - find out yourself you lazy besom)<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, 24 Jul 2018 at 16:59, Joe Landman <<a href="mailto:joe.landman@gmail.com">joe.landman@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:<br>
> Forgive me for saying this, but the philosophy for software defined <br>
> storage such as CEPH and Gluster is that forklift style upgrades <br>
> should not be necessary.<br>
> When a storage server is to be retired the data is copied onto the new <br>
> server then the old one taken out of service. Well, copied is not the <br>
> correct word, as there are erasure-coded copies of the data. <br>
> Rebalanced is probably a better word.<br>
<br>
This ^^<br>
<br>
I'd seen/helped build/benchmarked some very nice/fast CephFS based <br>
storage systems in $dayjob-1. While it is a neat system, if you are <br>
focused on availability, scalability, and performance, it's pretty hard <br>
to beat BeeGFS. We'd ($dayjob-1) deployed several very large/fast file <br>
systems with it on our spinning rust, SSD, and NVMe units.<br>
<br>
<br>
-- <br>
Joe Landman<br>
e: <a href="mailto:joe.landman@gmail.com" target="_blank">joe.landman@gmail.com</a><br>
t: @hpcjoe<br>
w: <a href="https://scalability.org" rel="noreferrer" target="_blank">https://scalability.org</a><br>
g: <a href="https://github.com/joelandman" rel="noreferrer" target="_blank">https://github.com/joelandman</a><br>
l: <a href="https://www.linkedin.com/in/joelandman" rel="noreferrer" target="_blank">https://www.linkedin.com/in/joelandman</a><br>
<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>
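<div dir="ltr"><div><br></div><div>PS - roughly what the Gluster geo-replication setup looks like, as a sketch only: the volume names "sitevol" and "hpcvol" and the host "hpc-head" are made-up placeholders, and it assumes the passwordless-SSH / pem-key prerequisites between the sites are already in place. The point is that changes only flow from the master volume to the slave, never back:</div><div><br></div><div># run on a node of the source (master) volume at the remote site<br>gluster volume geo-replication sitevol hpc-head::hpcvol create push-pem<br>gluster volume geo-replication sitevol hpc-head::hpcvol start<br><br># check the sync state<br>gluster volume geo-replication sitevol hpc-head::hpcvol status<br></div></div>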