<div dir="ltr">In general if you have a snowflake you need to take some steps.<div>1. Unrack and remove it from the population</div><div>2. Image, document the system</div><div>3. Sniff test, visual test, power on fans spinning test in a lab</div><div>4. Understand that it is ok for one system out of X (where X could be 1000) can fail</div><div>5. Return the system to rack if drive/image replacement resolves issue</div><div>6. Return system to supplier if above fails</div><div>7. Keep moving, don't spend the hours that equate to the cost of the node troubleshooting it unless capital budget is super tricky</div><div>8. Keep dialog with supplier all the time to say that everything is awesome so they are interested in the change of status</div><div>9. Don't troubleshoot in production ever....</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 10, 2017 at 9:39 AM, Faraz Hussain <span dir="ltr"><<a href="mailto:info@feacluster.com" target="_blank">info@feacluster.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">One of our compute nodes runs ~30% slower than others. It has the exact same image so I am baffled why it is running slow . I have tested OMP and MPI benchmarks. Everything runs slower. The cpu usage goes to 2000%, so all looks normal there.<br>
<br>
I thought it may have to do with cpu scaling, i.e when the kernel changes the cpu speed depending on the workload. But we do not have that enabled on these machines.<br>
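
Whether scaling is really off can be verified straight from sysfs. Here is a minimal Python 3 sketch (assuming a Linux kernel that exposes cpufreq under /sys; if the cpufreq directory is absent, no scaling driver is loaded) that prints each core's governor and current versus maximum frequency:

#!/usr/bin/env python3
"""Print each core's cpufreq governor and current vs. max frequency.

A minimal sketch, assuming a kernel with a cpufreq scaling driver;
if /sys/devices/system/cpu/cpuN/cpufreq is missing, governor-based
scaling is not in play on that core.
"""
import glob
import os

def read(cpufreq_dir, name):
    with open(os.path.join(cpufreq_dir, name)) as f:
        return f.read().strip()

for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
    cpufreq = os.path.join(cpu, "cpufreq")
    if not os.path.isdir(cpufreq):
        print(f"{os.path.basename(cpu)}: no cpufreq (scaling driver not loaded)")
        continue
    governor = read(cpufreq, "scaling_governor")
    cur = int(read(cpufreq, "scaling_cur_freq")) // 1000  # kHz -> MHz
    top = int(read(cpufreq, "scaling_max_freq")) // 1000
    print(f"{os.path.basename(cpu)}: governor={governor} cur={cur} MHz max={top} MHz")

If a governor such as ondemand or powersave shows up here, scaling is in fact active despite the expectation that it is disabled.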
<br>
Here is a snippet from "cat /proc/cpuinfo". Everything is identical to our other nodes. Any suggestions on what else to check? I have tried rebooting it.<br>
<br>
> processor : 19
> vendor_id : GenuineIntel
> cpu family : 6
> model : 62
> model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
> stepping : 4
> cpu MHz : 2500.098
> cache size : 25600 KB
> physical id : 1
> siblings : 10
> core id : 12
> cpu cores : 10
> apicid : 56
> initial apicid : 56
> fpu : yes
> fpu_exception : yes
> cpuid level : 13
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
> bogomips : 5004.97
> clflush size : 64
> cache_alignment : 64
> address sizes : 46 bits physical, 48 bits virtual
> power management:
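
One thing the static cpuinfo fields won't show: the "cpu MHz" value typically tracks the current core frequency, which can still move with turbo and thermal throttling even when governor-based scaling is off. Sampling it on all cores while a benchmark has the node loaded, on the slow node and on a healthy one, can expose a throttling core. A rough Python 3 sketch (the 10% threshold is an arbitrary illustration, not a tuned value):

#!/usr/bin/env python3
"""Summarize per-core 'cpu MHz' from /proc/cpuinfo and flag outliers.

A rough sketch: run it while all cores are under load, on both the
slow node and a known-good node, and compare the results.
"""

mhz = []
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("cpu MHz"):
            mhz.append(float(line.split(":")[1]))

fastest = max(mhz)
print(f"cores={len(mhz)} min={min(mhz):.0f} max={fastest:.0f} "
      f"avg={sum(mhz)/len(mhz):.0f} MHz")
for core, freq in enumerate(mhz):
    if freq < 0.9 * fastest:  # flag cores running >10% below the fastest
        print(f"  core {core} at {freq:.0f} MHz looks slow")

A core sitting well below its siblings under full load would point toward hardware (cooling, power delivery) rather than the image.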
<br>
<br>
<br>
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
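
As referenced in step 2 of the list above, here is a minimal sketch of the "document the system" half of that step, assuming stock Python 3 on the node; the output path and the set of /proc sources are illustrative, not prescriptive:

#!/usr/bin/env python3
"""Snapshot basic identity info for a node before unracking/re-imaging.

A minimal sketch of the 'image, document the system' step; extend
the record with whatever your site cares about (BIOS, firmware, etc.).
"""
import json
import platform
import time

def slurp(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        return f"unreadable: {err}"

record = {
    "taken_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    "hostname": platform.node(),
    "kernel": platform.release(),
    "cpuinfo": slurp("/proc/cpuinfo"),
    "meminfo": slurp("/proc/meminfo"),
    "loadavg": slurp("/proc/loadavg"),
}

out = f"/tmp/{record['hostname']}-snapshot.json"  # illustrative path
with open(out, "w") as f:
    json.dump(record, f, indent=2)
print(f"wrote {out}")

Diffing two such snapshots, one taken before unracking and one after re-imaging, makes it easy to show the supplier exactly what changed.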
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">- Andrew "lathama" Latham <a href="mailto:lathama@gmail.com" target="_blank">lathama@gmail.com</a> <a href="http://lathama.org" target="_blank">http://lathama.com</a> -</div></div></div></div>
</div>