<div dir="ltr"><div>We installed the kernel updates when they became available. Fortunately we were a little slower on the firmware updates, and managed to rollback the few we did apply that introduced instability. We're a bioinformatics shop (data parallel, lots of disk I/O mostly to GPFS, few-to-no cross-communication between nodes), and actually had some jobs start running faster, though the group running them came back to us to report that they had taken advantage of the maintenance window to make some tweaks to their pipeline.<br><br></div><div>That's sort of a long way of saying YMMV.<br></div><div><br></div>Skylar<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 8, 2018 at 10:10 AM, Prentice Bisbal <span dir="ltr"><<a href="mailto:pbisbal@pppl.gov" target="_blank">pbisbal@pppl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Beowulfers,<br>

Skylar

On Thu, Mar 8, 2018 at 10:10 AM, Prentice Bisbal <pbisbal@pppl.gov> wrote:
> Beowulfers,
>
> Have any of you updated the kernels on your clusters to fix the Spectre and Meltdown vulnerabilities? I was following this issue closely for the first couple of weeks. There seemed to be a lack of consensus on how much these fixes would impact HPC jobs, and if I recall correctly, some of the patches really hurt performance, or caused other problems. We took a wait-and-see approach here. So now that I've waited a while, what did you see?
>
> --
> Prentice