[Beowulf] Lustre Upgrades

Lux, Jim (337K) james.p.lux at jpl.nasa.gov
Thu Jul 26 13:49:15 PDT 2018

So this is the modern equivalent of "nothing beats the bandwidth of a station wagon full of mag tapes".
It *is* a clever idea - I'm sure all the big cloud providers have figured out how to do a "data center in a shipping container", and that's basically what this is.

I wonder what it costs (yeah, I know I can "Contact Sales to order an AWS Snowmobile"... but...)

Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

-----Original Message-----
From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of Fred Youhanaie
Sent: Tuesday, July 24, 2018 11:21 AM
To: beowulf at beowulf.org
Subject: Re: [Beowulf] Lustre Upgrades

Nah, that ain't large scale ;-) If you want large scale have a look at snowmobile:


They drive a 45-foot truck to your data centre, fill it up with your data bits, then drive it back to their data centre :-()


On 24/07/18 19:04, Jonathan Engwall wrote:
> Snowball is AWS's large-scale data transfer service.
> On July 24, 2018, at 8:35 AM, Joe Landman <joe.landman at gmail.com> wrote:
> On 07/24/2018 11:06 AM, John Hearns via Beowulf wrote:
>> Joe, sorry to split the thread here. I like BeeGFS and have set it up.
>> I have worked for two companies now who have sites around the world, 
>> those sites being independent research units. But HPC facilities are 
>> in headquarters.
>> The sites want to be able to drop files onto local storage yet have 
>> it magically appear on HPC storage, and same with the results going 
>> back the other way.
>> One company did this well with GPFS and AFM volumes.
> For the current company, I looked at Gluster, and Gluster 
> geo-replication is one-way only.
>> What do you know of the BeeGFS mirroring? Will it work over long 
>> distances? (Note to me - find out yourself you lazy besom)
> This isn't the use case for most/all cluster file systems.   This is 
> where distributed object systems and buckets rule.
> Take your file, dump it into an S3-like bucket on one end, pull it out 
> of the S3-like bucket on the other.  If you don't want to use get/put 
> operations, then use s3fs/s3ql.  You can back this up with replicating 
> erasure-coded (EC) minio stores (they take a few minutes to set up ... 
> compare that to the others).
> The downside is that minio had a limit of about 16 TiB last I 
> checked.  If you need more, replace minio with another system 
> (Igneous, Ceph, etc.).  Ping me offline if you want to talk more.
> [...]
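Joe's put/get pattern can be sketched with the MinIO client (`mc`); the aliases, endpoints, credentials, and bucket names below are hypothetical placeholders, and the same commands work against any S3-compatible store:

```shell
# Register both object stores with the MinIO client
# (endpoints and keys here are made up -- substitute your own):
mc alias set site-a https://s3.site-a.example.com ACCESSKEY SECRETKEY
mc alias set hq     https://s3.hq.example.com     ACCESSKEY SECRETKEY

# Create a transfer bucket on each side (no-op if it already exists):
mc mb --ignore-existing site-a/transfer
mc mb --ignore-existing hq/transfer

# "Put" on one end...
mc cp results.tar.gz site-a/transfer/

# ...and "get" on the other, or keep the buckets synced continuously:
mc mirror --watch site-a/transfer hq/transfer
```

For the s3fs/s3ql route mentioned above, the bucket would instead be mounted as a POSIX-ish filesystem on each side, so files dropped locally appear in the bucket without explicit put/get calls.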
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf