<div dir="ltr">;-)</div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 29, 2017 at 1:12 PM, John Hanks <span dir="ltr"><<a href="mailto:griznog@gmail.com" target="_blank">griznog@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for the suggestions, but when this Phoenix rises from the ashes it will be running BeeGFS over ZFS. The more I learn about GPFS the more I am reminded of quote seen recently on twitter:<br><div><br>"People bred, selected, and compensated to find complicated solutions do not have an incentive to implement simplified ones." -- <a href="https://twitter.com/nntaleb" target="_blank">@nntaleb</a><br><br>You can only read "you should contact support" so many times in documentation and forum posts before you remember "oh yeah, IBM is a _services_ company." </div><div><br></div><div>jbh<div><div class="h5"><br><br>On Sat, Apr 29, 2017 at 8:58 PM Evan Burness <<a href="mailto:evan.burness@cyclecomputing.com" target="_blank">evan.burness@cyclecomputing.<wbr>com</a>> wrote:<br></div></div></div><div><div class="h5"><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi John,<div><br></div><div>Yeah, I think the best word here is "ouch" unfortunately. I asked a few of my GPFS-savvy colleagues and they all agreed there aren't many good options here.</div><div><br></div><div>The one "suggestion" (I promise, no Monday morning quarterbacking) I and my storage admins friends can offer, if you have the ability to do so (both from a technical but also from a procurement/policy change standpoint) is to swap out spinning drives for NVMe ones for your metadata servers. Yes, you'll still take the write performance hit from replication relative to a non-replicated state, but modern NAND and NVMe drives are so fast and low latency that it will still be as fast or faster than the replicated, spinning disk approach it sounds like (please forgive me if I'm misunderstanding this piece).</div><div><br></div><div>We took this very approach on a 10+ petabyte DDN SFA14k running GPFS 4.2.1 that was designed to house research and clinical data for a large US hospital. They had 600+ million files b/t 0-10 MB, so we had high-end requirements for both metadata performance AND reliability. Like you, we tagged 4 GPFS NSD's with metadata duty and gave each a 1.6 TB Intel P3608 NVMe disk, and the performance was still exceptionally good even with replication because these modern drives are such fire-breathing IOPS monsters. If you don't have as much data as this scenario, you could definitely get away with 400 or 800 GB versions and save yourself a fair amount of $$.</div><div><br></div><div>Also, if you're looking to experiment with whether a replicated approach can meet your needs, I suggest you check out AWS' I3 instances for short-term testing. They have up to 8 * 1.9 TB NVMe drives. At Cycle Computing we've helped a number of .com's and .edu's address high-end IO needs using these or similar instances. If you have a decent background with filesystems these cloud instances can be excellent performers, either for test/lab scenarios like this or production environments.</div><div><br></div><div>Hope this helps!</div><div><br></div><div><br></div><div>Best,</div><div><br></div><div>Evan Burness</div></div><div dir="ltr"><div><br></div><div><div style="font-size:12.8px">-------------------------</div><div style="font-size:12.8px"><font size="1">Evan Burness</font></div><div style="font-size:12.8px"><font size="1">Director, HPC</font></div><div style="font-size:12.8px"><font size="1">Cycle Computing</font></div><div style="font-size:12.8px"><a href="mailto:evan.burness@cyclecomputing.com" target="_blank"><font size="1">evan.burness@cyclecomputing.<wbr>com</font></a></div><div style="font-size:12.8px"><font size="1"><a href="tel:(919)%20724-9338" value="+19197249338" target="_blank">(919) 724-9338</a></font></div></div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 29, 2017 at 11:13 AM, John Hanks <span dir="ltr"><<a href="mailto:griznog@gmail.com" target="_blank">griznog@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">There are no dumb questions in this snafu, I have already covered the dumb aspects adequately :)</div><span>
</span><p dir="ltr">Replication was not enabled, this was scratch space set up to be as large and fast as possible. The fact that I can say "it was scratch" doesn't make it sting less, thus the grasping at straws. </p><span></span>jbh<div class="m_-2742196184209609822m_5705911575897551315HOEnZb"><div class="m_-2742196184209609822m_5705911575897551315h5"><div><br><div class="gmail_quote"><div dir="ltr">On Sat, Apr 29, 2017, 7:05 PM Evan Burness <<a href="mailto:evan.burness@cyclecomputing.com" target="_blank">evan.burness@cyclecomputing.<wbr>com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi John,<div><br></div><div>I'm not a GPFS expert, but I did manage some staff that ran GPFS filesystems while I was at NCSA. Those folks reeeaaalllly knew what they were doing.</div><div><br></div><div>Perhaps a dumb question, but should we infer from your note that metadata replication is not enabled across those 4 NSDs handling it?</div><div><br></div><div><br></div><div>Best,</div><div><br></div><div>Evan</div><div><br></div><div><br></div><div>-------------------------</div><div><font size="1">Evan Burness</font></div><div><font size="1">Director, HPC</font></div><div><font size="1">Cycle Computing</font></div><div><a href="mailto:evan.burness@cyclecomputing.com" target="_blank"><font size="1">evan.burness@cyclecomputing.<wbr>com</font></a></div><div><font size="1"><a href="tel:(919)%20724-9338" value="+19197249338" target="_blank">(919) 724-9338</a></font></div></div><div class="gmail_extra"></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 29, 2017 at 9:36 AM, Peter St. John <span dir="ltr"><<a href="mailto:peter.st.john@gmail.com" target="_blank">peter.st.john@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">just a friendly reminder that while the probability of a particular coincidence might be very low, the probability that there will be **some** coincidence is very high.<div><br></div><div>Peter (pedant)</div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742h5">On Sat, Apr 29, 2017 at 3:00 AM, John Hanks <span dir="ltr"><<a href="mailto:griznog@gmail.com" target="_blank">griznog@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742h5"><div dir="ltr">Hi,<div><br></div><div>I'm not getting much useful vendor information so I thought I'd ask here in the hopes that a GPFS expert can offer some advice. We have a GPFS system which has the following disk config:</div><div><br></div><div><div>[root@grsnas01 ~]# mmlsdisk grsnas_data</div><div>disk driver sector failure holds holds storage</div><div>name type size group metadata data status availability pool</div><div>------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------</div><div>SAS_NSD_00 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_01 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_02 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_03 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_04 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_05 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_06 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_07 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_08 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_09 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_10 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_11 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_12 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_13 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_14 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_15 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_16 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_17 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_18 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_19 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_20 nsd 512 100 No Yes ready up system</div><div>SAS_NSD_21 nsd 512 100 No Yes ready up system</div><div>SSD_NSD_23 nsd 512 200 Yes No ready up system</div><div>SSD_NSD_24 nsd 512 200 Yes No ready up system</div><div>SSD_NSD_25 nsd 512 200 Yes No to be emptied down system</div><div>SSD_NSD_26 nsd 512 200 Yes No ready up system</div><div><br></div></div><div>SSD_NSD_25 is a mirror in which both drives have failed due to a series of unfortunate events and will not be coming back. From the GPFS troubleshooting guide it appears that my only alternative is to run </div><div>
<p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">mmdeldisk grsnas_data SSD_NSD_25 -p</p><p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">around which the documentation also warns is irreversible, the sky is likely to fall, dogs and cats sleeping together, etc. But at this point I'm already in an irreversible situation. Of course this is a scratch filesystem, of course people were warned repeatedly about the risk of using a scratch filesystem that is not backed up and of course many ignored that. I'd like to recover as much as possible here. Can anyone confirm/reject that deleting this disk is the best way forward or if there are other alternatives to recovering data from GPFS in this situation?</p><p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">Any input is appreciated. Adding salt to the wound is that until a few months ago I had a complete copy of this filesystem that I had made onto some new storage as a burn-in test but then removed as that storage was consumed... As they say, sometimes you eat the bear, and sometimes, well, the bear eats you.</p><p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">Thanks,</p><p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">jbh</p><p class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728m_-9194302790648927810inbox-inbox-p1">(Naively calculated probability of these two disks failing close together in this array: 0.00001758. I never get this lucky when buying lottery tickets.)</p></div></div><span class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742m_532399161096765728HOEnZb"><font color="#888888"><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><div dir="ltr"><div>‘[A] talent for following the ways of yesterday, is not sufficient to improve the world of today.’</div><div> - King Wu-Ling, ruler of the Zhao state in northern China, 307 BC</div></div></div>
</font></span><br></div></div>______________________________<wbr>_________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/<wbr>mailman/listinfo/beowulf</a><br>
<br></blockquote></div><br></div>
<br>______________________________<wbr>_________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/<wbr>mailman/listinfo/beowulf</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div></div><div class="gmail_extra">-- <br><div class="m_-2742196184209609822m_5705911575897551315m_-4270104721296845677m_4970974869590099511m_-1652594465904345742gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Evan Burness</div><div>Director, HPC Solutions</div><div>Cycle Computing</div><div><a href="mailto:evan.burness@cyclecomputing.com" target="_blank">evan.burness@cyclecomputing.<wbr>com</a></div><div><a href="tel:(919)%20724-9338" value="+19197249338" target="_blank">(919) 724-9338</a></div></div></div>
</div></blockquote></div></div><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><div dir="ltr"><div>‘[A] talent for following the ways of yesterday, is not sufficient to improve the world of today.’</div><div> - King Wu-Ling, ruler of the Zhao state in northern China, 307 BC</div></div></div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="m_-2742196184209609822m_5705911575897551315gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Evan Burness</div><div>Director, HPC Solutions</div><div>Cycle Computing</div><div><a href="mailto:evan.burness@cyclecomputing.com" target="_blank">evan.burness@cyclecomputing.<wbr>com</a></div><div><a href="tel:(919)%20724-9338" value="+19197249338" target="_blank">(919) 724-9338</a></div></div></div>
</div>
</blockquote></div></div></div></div></div><div class="HOEnZb"><div class="h5"><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><div dir="ltr"><div>‘[A] talent for following the ways of yesterday, is not sufficient to improve the world of today.’</div><div> - King Wu-Ling, ruler of the Zhao state in northern China, 307 BC</div></div></div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Evan Burness</div><div>Director, HPC Solutions</div><div>Cycle Computing</div><div><a href="mailto:evan.burness@cyclecomputing.com" target="_blank">evan.burness@cyclecomputing.com</a></div><div>(919) 724-9338</div></div></div>
</div>