[Beowulf] Lustre Upgrades
Joe Landman
joe.landman at gmail.com
Tue Jul 24 07:58:20 PDT 2018
On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
> Forgive me for saying this, but the philosophy for software defined
> storage such as CEPH and Gluster is that forklift style upgrades
> should not be necessary.
> When a storage server is to be retired, the data is copied onto the new
> server, then the old one is taken out of service. Well, copied is not the
> correct word, as there are erasure-coded copies of the data.
> Rebalanced is probably a better word.
This ^^
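For concreteness, the retire-and-rebalance flow John describes looks roughly like the following in Ceph terms: mark the retiring server's OSDs "out" and let CRUSH re-place the erasure-coded chunks on the remaining hosts. A rough sketch, with hypothetical OSD ids, run from an admin node:

```
# Hypothetical OSD ids (12-14) belonging to the retiring server.
# Marking them "out" triggers CRUSH to rebalance their placement
# groups onto the rest of the cluster; no manual copy step.
ceph osd out 12 13 14

# Watch recovery/backfill progress until the cluster is healthy again.
ceph -w

# Once no placement groups reference the old OSDs, remove them.
ceph osd purge 12 --yes-i-really-mean-it
```

The point being: the data movement is a side effect of membership change, not a forklift migration.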
I'd seen/helped build/benchmarked some very nice/fast CephFS-based
storage systems in $dayjob-1. While it is a neat system, if you are
focused on availability, scalability, and performance, it's pretty hard
to beat BeeGFS. We'd ($dayjob-1) deployed several very large/fast file
systems with it on our spinning rust, SSD, and NVMe units.
--
Joe Landman
e: joe.landman at gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman