<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div><br></div>M,<div><br></div><div>I have no intention of sharing what I have used; I was just remarking on what is current relative to the obsolete downloadable Fermi version and how one might get it ... and yes, get it, under NDA.</div><div><br></div><div>R</div><div><br><br><div id="AppleMailSignature" dir="ltr">Sent from my iPhone</div><div dir="ltr"><br>On Aug 14, 2019, at 11:14 PM, Matt Wallis <<a href="mailto:mattw@madmonks.org">mattw@madmonks.org</a>> wrote:<br><br></div><blockquote type="cite"><div dir="ltr"><meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div class=""><br class=""></div><div class="">NVIDIA has a version of HPL floating around, but will only supply it under NDA, and you’re definitely not allowed to share the version you have. Not that that doesn’t happen, of course, but NVIDIA would definitely prefer you didn’t.</div><div class=""><br class=""></div><div class="">Matt.</div><br class=""><div class="">
<div dir="auto" style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div style="color: rgb(0, 0, 0); font-family: Inconsolata; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;">— </div><div style="color: rgb(0, 0, 0); font-family: Inconsolata; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;">Matt Wallis</div><div style="color: rgb(0, 0, 0); font-family: Inconsolata; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;"><a href="mailto:mattw@madmonks.org" class="">mattw@madmonks.org</a></div><div style="color: rgb(0, 0, 0); font-family: Inconsolata; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class=""><br class=""></div><br class="Apple-interchange-newline"></div><br class="Apple-interchange-newline">
</div>
<div><br class=""><blockquote type="cite" class=""><div class="">On 15 Aug 2019, at 07:42, Richard Walsh <<a href="mailto:rbwcnslt@gmail.com" class="">rbwcnslt@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><br class=""><div class="">You have to talk to the right people at NVIDIA ... benchmarking group.</div><div class=""><br class=""></div><div class="">The version I am using from 2018 is:</div><div class=""><br class=""></div><div class="">xhpl_cuda9.2.88_mkl_2018_ompi_3.1.0_gcc485_sm35_sm60_sm70_5_18_18</div><div class=""><br class=""></div><div class="">but there must be something more current than that now. This one works up<br class=""></div><div class="">through the V100, as the name implies.</div><div class=""><br class=""></div><div class="">rbw</div></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 14, 2019 at 12:31 PM Michael Di Domenico <<a href="mailto:mdidomenico4@gmail.com" class="">mdidomenico4@gmail.com</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Yeah, I'm not surprised. The original developer did it as part of a<br class="">
research project, I believe. It was never updated after he wrote the<br class="">
paper and published.<br class="">
<br class="">
I wish NVIDIA would release the HPCG benchmark source code. All you<br class="">
can get is the precompiled binaries, which I can't use. I've asked my<br class="">
contacts, but I guess I don't have enough sway. :(<br class="">
<br class="">
On Wed, Aug 14, 2019 at 1:12 PM Prentice Bisbal via Beowulf<br class="">
<<a href="mailto:beowulf@beowulf.org" target="_blank" class="">beowulf@beowulf.org</a>> wrote:<br class="">
><br class="">
> I looked into this further, and the version available is for the Fermi<br class="">
> GPUs, and doesn't really work with the V100 GPUs we have in our target<br class="">
> system. :(<br class="">
><br class="">
> --<br class="">
> Prentice<br class="">
><br class="">
> On 8/14/19 11:17 AM, Michael Di Domenico wrote:<br class="">
> > gpu linpack from nvidia is available via the developer portal. but<br class="">
> > you can probably also reach out to the developer directly, he's<br class="">
> > friendly.<br class="">
> ><br class="">
> > though they haven't updated it in a long time, not sure if it still<br class="">
> > runs on the newer cards.<br class="">
> ><br class="">
> > On Wed, Aug 14, 2019 at 11:15 AM Prentice Bisbal via Beowulf<br class="">
> > <<a href="mailto:beowulf@beowulf.org" target="_blank" class="">beowulf@beowulf.org</a>> wrote:<br class="">
> >> I have a new GPU-based cluster that I'd like to benchmark with<br class="">
> >> High-Performance LINPACK (HPL). Does anyone know where I can get a<br class="">
> >> version of HPL written for GPUs? I know NVIDIA has a version, and I've<br class="">
> >> already reached out to NVIDIA to see if they'll share their version with<br class="">
> >> me. I just figured I'd ask here, in case there are other sources.<br class="">
> >><br class="">
> >> --<br class="">
> >> Prentice<br class="">
> >><br class="">
> >> _______________________________________________<br class="">
> >> Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank" class="">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br class="">
> >> To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer" target="_blank" class="">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br class="">
</blockquote></div>
</div></blockquote></div><br class=""></div></blockquote></div></body></html>