[Beowulf] GPFS question
lance.wilson at monash.edu
Mon Apr 29 14:59:06 PDT 2019
Our 1 PB array took up to 3 days when we did this. It might be faster for
you, but it took a very long time, with little to no indication of how long
it would take. Just a word of caution though: we once went too long without
an offline scan, and that scan took much longer than the previous one. Good luck!
On Tue, 30 Apr 2019 at 7:34 am, Jörg Saßmannshausen <
sassy-work at sassy.formativ.net> wrote:
> Dear all,
> just a quick question regarding GPFS:
> we are running a 9 PB GPFS storage space at work, of which around 6-7 PB is
> used. It is a single file system, but with different file-sets installed on it.
> During our routine checks we found that:
> $ mmhealth node show -n all
> reports a problem with FOO
> (where FOO is the file system).
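> (To get more detail on whatever event mmhealth flags, recent Spectrum
> Scale releases have something along the lines of
> $ mmhealth event show <event_name>
> where <event_name> is whatever the node show output reports; check your
> release's documentation for the exact syntax.)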
> Our vendor suggested running an online check:
> $ mmfsck FOO -o -y
> which is still running.
> Today the vendor suggested taking the GPFS file system offline and running
> the above command without the -o option, which would lead to an outage.
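> A minimal sketch of that offline run (assuming FOO can be unmounted on all
> nodes, and using the standard GPFS administration commands) would be:
>
> $ mmumount FOO -a    # unmount FOO on every node in the cluster
> $ mmfsck FOO -y      # full offline check, repairing what it finds
> $ mmmount FOO -a     # remount FOO on all nodes afterwards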
> So my simple question is: has anybody ever done that on such a large file
> system, and roughly how long would that take? Every time I ask this question I
> get told: a long time!
> Our vendor told us we could use, for example,
> --threads 128
> as opposed to the normally used 16 threads, so I am aware my mileage will vary
> here a bit, but I would just like a guesstimate of the time.
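> Put together, the offline run with more threads would presumably look like:
>
> $ mmfsck FOO -y --threads 128
>
> (--threads 128 is the vendor's suggestion from above; how much it actually
> helps will depend on the CPU and I/O capacity of the NSD servers.)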
> Many thanks for your help here!
> All the best from London
Dr Lance Wilson
Characterisation Virtual Laboratory (CVL) Coordinator &
Senior HPC Consultant
Ph: 03 99055942 (+61 3 99055942)
Mobile: 0437414123 (+61 4 3741 4123)
Multi-modal Australian ScienceS Imaging and Visualisation Environment