<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>a) For a technically knowledgeable audience, you could demonstrate
      some simple benchmark codes.</p>
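    <p>A minimal sketch of such a benchmark code is below: each MPI rank
      integrates a slice of 4/(1+x^2) to approximate pi, and rank 0 reports
      the wall-clock time, so running it on 1, 2, 4, ... nodes gives a simple
      speed-up curve you can plot. It assumes an MPI toolchain (e.g. Open MPI
      or MPICH) is installed; the file name and problem size are illustrative
      only.</p>
    <pre wrap="">
/* pi_scaling.c -- toy MPI benchmark for a node-scaling demo (illustrative sketch).
 *
 * Build and run (assuming mpicc/mpirun from Open MPI or MPICH):
 *   mpicc -O2 pi_scaling.c -o pi_scaling
 *   mpirun -np 36 ./pi_scaling
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long N = 2000000000L;        /* total number of integration steps */
    int rank, size;
    double h, sum = 0.0, pi = 0.0, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);       /* start the clock together */
    t0 = MPI_Wtime();

    h = 1.0 / (double)N;
    for (long i = rank; i < N; i += size) {   /* each rank takes every size-th step */
        double x = h * ((double)i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    sum *= h;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d ranks: pi = %.12f, elapsed = %.3f s\n", size, pi, t1 - t0);

    MPI_Finalize();
    return 0;
}
</pre>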
    <p>b) For a more general audience, you might also look for some
      parallel open source applications that are specific to the domain
      of interest. For example, for engineering, OpenFOAM
      (<a class="moz-txt-link-freetext" href="https://openfoam.com/">https://openfoam.com/</a>) has some demo setups that can provide good
      motivation. FEniCS (<a class="moz-txt-link-freetext" href="https://fenicsproject.org/">https://fenicsproject.org/</a>) and deal.II
      (<a class="moz-txt-link-freetext" href="https://www.dealii.org/">https://www.dealii.org/</a>) also have some examples. For the oil and gas
      industry, SpecFEM3D
      (<a class="moz-txt-link-freetext" href="https://geodynamics.org/cig/software/specfem3d/">https://geodynamics.org/cig/software/specfem3d/</a>) would be a good fit;
      for computational chemistry, NWChem
      (<a class="moz-txt-link-freetext" href="http://www.nwchem-sw.org/index.php/Main_Page">http://www.nwchem-sw.org/index.php/Main_Page</a>) might be nice.
      There are of course many other codes as well. If the cluster is
      mostly used as a task farm to run many single-node jobs rather
      than for one large parallel application, some embarrassingly
      parallel task is fine
      (<a class="moz-txt-link-freetext" href="https://en.wikipedia.org/wiki/Embarrassingly_parallel">https://en.wikipedia.org/wiki/Embarrassingly_parallel</a>), ideally
      set up so that upper management can execute it themselves; a minimal
      sketch of that pattern follows below. One such
      application is Mitsuba (<a class="moz-txt-link-freetext" href="https://www.mitsuba-renderer.org/">https://www.mitsuba-renderer.org/</a>); some
      measurements for Mitsuba on a small two-node setup can be found at
<a class="moz-txt-link-freetext" href="https://courses.cs.ut.ee/MTAT.08.037/2015_spring/uploads/Main/Martoja.pdf">https://courses.cs.ut.ee/MTAT.08.037/2015_spring/uploads/Main/Martoja.pdf</a>.<br>
    </p>
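    <p>The sketch below shows that embarrassingly parallel pattern: each rank
      works through its own share of independent tasks (a dummy compute loop
      standing in for, say, rendering one frame) with no communication until a
      final tally, so throughput should scale almost linearly with node count.
      It is written as a single MPI program only for simplicity of the demo; a
      real task farm would usually just submit many independent jobs to the
      scheduler. The task count and per-task workload are illustrative only.</p>
    <pre wrap="">
/* taskfarm.c -- embarrassingly parallel demo (illustrative sketch).
 *
 * Build and run:
 *   mpicc -O2 taskfarm.c -o taskfarm -lm
 *   mpirun -np 36 ./taskfarm
 */
#include <mpi.h>
#include <stdio.h>
#include <math.h>

/* Stand-in for one independent task (one frame to render, one input file, ...). */
static double do_task(int task_id)
{
    double x = (double)task_id;
    for (int i = 0; i < 5000000; i++)
        x = sin(x) + cos(x);          /* burn some CPU time */
    return x;
}

int main(int argc, char **argv)
{
    const int n_tasks = 360;          /* total number of independent tasks */
    int rank, size, done = 0, total_done = 0;
    double local_sum = 0.0, checksum = 0.0, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);      /* start the clock together */
    t0 = MPI_Wtime();

    /* Static round-robin distribution: rank r handles tasks r, r+size, ... */
    for (int t = rank; t < n_tasks; t += size) {
        local_sum += do_task(t);
        done++;
    }

    /* The only communication in the whole run: collect counts and a checksum. */
    MPI_Reduce(&done, &total_done, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local_sum, &checksum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d tasks on %d ranks in %.2f s (%.1f tasks/s, checksum %g)\n",
               total_done, size, t1 - t0, total_done / (t1 - t0), checksum);

    MPI_Finalize();
    return 0;
}
</pre>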
    <p>Probably the most useful approach is to find some users of the
      cluster and have one or two of them explain how you have helped, or
      will help, improve company profitability by making their workflow
      more effective. <br>
    </p>
    <div class="moz-cite-prefix">On 1/14/20 10:54 PM, Scott Atchley
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAL8g0jJm7PK13Ts6c0rPzcuZr8_T=hASG_NxGV9h+=KR=wtH3Q@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Yes, we have built a few of them. We have one here,
        one at AMSE, and one that travels to schools in one of our
        traveling science trailers.</div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Tue, Jan 14, 2020 at 10:29
          AM John McCulloch <<a href="mailto:johnm@pcpcdirect.com"
            moz-do-not-send="true">johnm@pcpcdirect.com</a>> wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
          <div lang="EN-US">
            <div class="gmail-m_-5851884763049962181WordSection1">
              <p class="MsoNormal">Hey Scott, I think I saw an exhibit
                like what you’re describing at the AMSE when I was on a
                project in Oak Ridge. Was that it?</p>
              <p class="MsoNormal"> </p>
              <p class="MsoNormal"><span
                  style="font-family:Arial,sans-serif;color:black">John
                  McCulloch | PCPC Direct, Ltd. | desk 713-344-0923</span></p>
              <p class="MsoNormal"> </p>
              <p class="MsoNormal"><b>From:</b> Scott Atchley <<a
                  href="mailto:e.scott.atchley@gmail.com"
                  target="_blank" moz-do-not-send="true">e.scott.atchley@gmail.com</a>>
                <br>
                <b>Sent:</b> Tuesday, January 14, 2020 7:19 AM<br>
                <b>To:</b> John McCulloch <<a
                  href="mailto:johnm@pcpcdirect.com" target="_blank"
                  moz-do-not-send="true">johnm@pcpcdirect.com</a>><br>
                <b>Cc:</b> <a href="mailto:beowulf@beowulf.org"
                  target="_blank" moz-do-not-send="true">beowulf@beowulf.org</a><br>
                <b>Subject:</b> Re: [Beowulf] HPC demo</p>
              <p class="MsoNormal"> </p>
              <div>
                <p class="MsoNormal">We still have <a
                    href="https://tinytitan.github.io" target="_blank"
                    moz-do-not-send="true">Tiny Titan</a> even though
                  Titan is gone. It allows users to toggle processors on
                  and off and the display has a mode where the "water"
                  is color coded by the processor, which has a
                  corresponding light. You can see the frame rate go up
                  as you add processors and the motion becomes much more
                  fluid.</p>
              </div>
              <p class="MsoNormal"> </p>
              <div>
                <div>
                  <p class="MsoNormal">On Mon, Jan 13, 2020 at 7:35 PM
                    John McCulloch <<a
                      href="mailto:johnm@pcpcdirect.com" target="_blank"
                      moz-do-not-send="true">johnm@pcpcdirect.com</a>>
                    wrote:</p>
                </div>
                <blockquote style="border-style:none none none
solid;border-left-width:1pt;border-left-color:rgb(204,204,204);padding:0in
                  0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
                  <div>
                    <div>
                      <p class="MsoNormal">I recently inherited
                        management of a cluster and my knowledge is
                        limited to a bit of Red Hat. I need to figure
                        out a demo for upper management graphically
                        demonstrating the speed up of running a parallel
                        app on one x86 node versus multiple nodes up to
                        36. They have dual Gold 6132 procs and Mellanox
                        EDR interconnect. Any suggestions would be
                        appreciated.</p>
                      <p class="MsoNormal"> </p>
                      <p class="MsoNormal">Respectfully,</p>
                      <p class="MsoNormal"><span
                          style="font-family:Arial,sans-serif;color:black">John
                          McCulloch | PCPC Direct, Ltd.</span></p>
                      <p class="MsoNormal"> </p>
                    </div>
                  </div>
                  <p class="MsoNormal">_______________________________________________<br>
                    Beowulf mailing list, <a
                      href="mailto:Beowulf@beowulf.org" target="_blank"
                      moz-do-not-send="true">Beowulf@beowulf.org</a>
                    sponsored by Penguin Computing<br>
                    To change your subscription (digest mode or
                    unsubscribe) visit <a
                      href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf"
                      target="_blank" moz-do-not-send="true">
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a></p>
                </blockquote>
              </div>
            </div>
          </div>
        </blockquote>
      </div>
      <br>
      <pre class="moz-quote-pre" wrap="">_______________________________________________
Beowulf mailing list, <a class="moz-txt-link-abbreviated" href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a> sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit <a class="moz-txt-link-freetext" href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a>
</pre>
    </blockquote>
  </body>
</html>