[Beowulf] Lustre Upgrades

Fred Youhanaie fly at anydata.co.uk
Tue Jul 24 11:20:42 PDT 2018

Nah, that ain't large scale ;-) If you want large scale have a look at Snowmobile:


They drive a 45-foot truck to your data centre, fill it up with your data bits, then drive it back to their data centre :-)


On 24/07/18 19:04, Jonathan Engwall wrote:
> Snowball is the very large scale AWS data service.
> On July 24, 2018, at 8:35 AM, Joe Landman <joe.landman at gmail.com> wrote:
> On 07/24/2018 11:06 AM, John Hearns via Beowulf wrote:
>> Joe, sorry to split the thread here. I like BeeGFS and have set it up.
>> I have worked for two companies now who have sites around the world,
>> those sites being independent research units. But HPC facilities are
>> in headquarters.
>> The sites want to be able to drop files onto local storage yet have it
>> magically appear on HPC storage, and same with the results going back
>> the other way.
>> One company did this well with GPFS and AFM volumes.
>> For the current company, I looked at gluster and Gluster
>> geo-replication is one way only.
>> What do you know of the BeeGFS mirroring? Will it work over long
>> distances? (Note to me - find out yourself you lazy besom)
> This isn't the use case for most/all cluster file systems.   This is
> where distributed object systems and buckets rule.
> Take your file, dump it into an S3 like bucket on one end, pull it out
> of the S3 like bucket on the other.  If you don't want to use get/put
> operations, then use s3fs/s3ql.  You can back this up with replicated,
> erasure-coded (EC) minio stores (will take a few minutes to set up ...
> compare that to others).
> The down side to this is that minio has limits of about 16TiB last I
> checked.   If you need more, replace minio with another system (igneous,
> ceph, etc.).  Ping me offline if you want to talk more.
> [...]
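The put/get workflow Joe describes can be sketched as below. This is a toy in-memory `Bucket` standing in for an S3-compatible endpoint such as minio (a real deployment would use boto3 or the s3fs/s3ql layers he mentions); the class, the key name, and the payload are all hypothetical, chosen only to make the site-to-site data flow visible.

```python
# Toy sketch of the put/get bucket pattern: site A puts an object into a
# shared S3-like bucket, HQ gets it out and verifies integrity.
# The Bucket class is a hypothetical in-memory stand-in, NOT the minio API.
import hashlib


class Bucket:
    """Minimal stand-in for an S3-like object store."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Store the object and return its MD5 checksum, much as S3
        # returns an ETag for a simple (non-multipart) upload.
        self._objects[key] = bytes(data)
        return hashlib.md5(data).hexdigest()

    def get(self, key):
        return self._objects[key]


# Remote site drops a results file into the shared bucket ...
bucket = Bucket()
payload = b"simulation results from the remote site"
etag = bucket.put("results/run-042.dat", payload)

# ... and HQ pulls it out the other end, checking it against the ETag.
fetched = bucket.get("results/run-042.dat")
assert hashlib.md5(fetched).hexdigest() == etag
print(fetched.decode())
```

With a real minio endpoint the same shape holds: the put on one side and the get on the other are independent operations against the bucket, which is what makes this fit the "drop files locally, have them appear at HQ" use case better than a geo-replicated cluster file system.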
