<div><div dir="auto">Hi <span style="color:rgb(49,49,49);word-spacing:1px">Jörg,</span></div></div><div dir="auto"><span style="color:rgb(49,49,49);word-spacing:1px">Our 1Pb array took up to 3 days when we did this. It might be faster for you but it took a very long time with little to no indication of how long it would take. Just a word of caution though, we didn’t do an offline scan once too long and that scan took much longer than previously. Good luck!</span></div><div dir="auto"><span style="color:rgb(49,49,49);word-spacing:1px"><br></span></div><div dir="auto"><span style="color:rgb(49,49,49);word-spacing:1px">Lance</span></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 30 Apr 2019 at 7:34 am, Jörg Saßmannshausen <<a href="mailto:sassy-work@sassy.formativ.net">sassy-work@sassy.formativ.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear all,<br>

just a quick question regarding GPFS:
we are running 9 PB of GPFS storage at work, of which around 6-7 PB are used. It is a single file system, but with different filesets on it.
During our routine checks we found that
$ mmhealth node show -n all
reports this problem:

fserrinvalid(FOO)

(where FOO is the file system).

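As far as I can tell, a description of the event itself can be displayed with

$ mmhealth event show fserrinvalid

(assuming our Spectrum Scale release is recent enough for that subcommand), but that only describes the error; it does not tell us how long a repair would take.
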
Our vendor suggested doing an online check:

$ mmfsck FOO -o -y

which is still running.
Today the vendor suggested taking the GPFS file system offline and running the above command without the -o option, which would mean an outage.
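
My understanding is that the offline run would look roughly like this (FOO again being the file system; please correct me if the exact options differ on your release):

$ mmumount FOO -a
$ mmfsck FOO -y
$ mmmount FOO -a

i.e. unmount the file system on all nodes (hence the outage), run the full check with automatic repair, and remount everywhere once it has finished.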

So my simple question is: has anybody ever done that on such a large file system, and roughly how long would it take? Every time I ask this question I get told: a long time!
Our vendor told us we could use, for example,
--threads 128
as opposed to the normally used 16 threads, so I am aware my mileage will vary here a bit, but I would just like a guesstimate of the time.
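
Presumably the full offline invocation would then become something like

$ mmfsck FOO -y --threads 128

though that is only my assumption from what the vendor told us; we have not run it that way yet.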

Many thanks for your help here!

All the best from London

Jörg

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
--
Cheers,

Lance
--
Dr Lance Wilson
Characterisation Virtual Laboratory (CVL) Coordinator &
Senior HPC Consultant
Ph: 03 99055942 (+61 3 99055942)
Mobile: 0437414123 (+61 4 3741 4123)
Multi-modal Australian ScienceS Imaging and Visualisation Environment
(www.massive.org.au)
Monash University