[Beowulf] Lustre Upgrades

Joe Landman joe.landman at gmail.com
Tue Jul 24 08:34:25 PDT 2018



On 07/24/2018 11:06 AM, John Hearns via Beowulf wrote:
> Joe, sorry to split the thread here. I like BeeGFS and have set it up.
> I have worked for two companies now who have sites around the world, 
> those sites being independent research units. But HPC facilities are 
> in headquarters.
> The sites want to be able to drop files onto local storage and have 
> them magically appear on HPC storage, and the same with results going 
> back the other way.
>
> One company did this well with GPFS and AFM volumes.
> For the current company, I looked at Gluster, but Gluster 
> geo-replication is one-way only.
> What do you know of the BeeGFS mirroring? Will it work over long 
> distances? (Note to me - find out yourself you lazy besom)

This isn't the use case for most/all cluster file systems.   This is 
where distributed object systems and buckets rule.

Take your file, dump it into an S3-like bucket on one end, and pull it 
out of the S3-like bucket on the other.  If you don't want to use 
get/put operations, then use s3fs/s3ql.  You can back this with 
replicating, erasure-coded (EC) minio stores (these take a few minutes 
to set up ... compare that to others).

The downside is that minio had a limit of about 16 TiB per instance the 
last time I checked.  If you need more, replace minio with another 
system (Igneous, Ceph, etc.).  Ping me offline if you want to talk more.

[...]

-- 
Joe Landman
e:joe.landman at gmail.com
t: @hpcjoe
w:https://scalability.org
g:https://github.com/joelandman
l:https://www.linkedin.com/in/joelandman


