<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">Thanks everyone! Your replies were very helpful.</div><br class=""><div><blockquote type="cite" class=""><div class=""><br class=""></div><div class=""><div dir="auto" style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div style="direction: ltr;" class=""><blockquote type="cite" class=""><div class="">On Mar 8, 2016, at 2:49 PM, Christopher Samuel <<a href="mailto:samuel@unimelb.edu.au" class="">samuel@unimelb.edu.au</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">On 08/03/16 15:43, Jeff Friedman wrote:<br class=""><br class=""><blockquote type="cite" class="">Hello all. I am just entering the HPC Sales Engineering role, and would<br class="">like to focus my learning on the most relevant stuff. I have searched<br class="">near and far for a current survey of some sort listing the top used<br class="">“stacks”, but cannot seem to find one that is free. I was breaking<br class="">things down similar to this:<br class=""></blockquote><br class="">All the following is just what we use, but in your role I would have thought<br class="">you'll probably need to be familiar with most options, depending on customer<br class="">requirements. Specialisation in your preferred suite is, of course, down to<br class="">you!<br class=""><br class=""><blockquote type="cite" class="">_OS distro_: CentOS, Debian, TOSS, etc? I know some come trimmed down,<br class="">and also include specific HPC libraries, like CNL, CNK, INK? <br class=""></blockquote><br class="">RHEL - hardware support often comes with the attitude of "we support both<br class="">types of Linux: RHEL and SLES".<br class=""><br class=""><blockquote type="cite" class="">_MPI options_: MPICH2, MVAPICH2, Open MPI, Intel MPI, ? 
<br class=""></blockquote><br class="">Open-MPI<br class=""><br class=""><blockquote type="cite" class="">_Provisioning software_: Cobbler, Warewulf, xCAT, Openstack, Platform HPC, ?<br class=""></blockquote><br class="">xCAT<br class=""><br class=""><blockquote type="cite" class="">_Configuration management_: Warewulf, Puppet, Chef, Ansible, ? <br class=""></blockquote><br class="">xCAT<br class=""><br class="">We use Puppet on for infrastructure VMs (running Debian).<br class=""><br class=""><blockquote type="cite" class="">_Resource and job schedulers_: I think these are basically the same<br class="">thing? Torque, Lava, Maui, Moab, SLURM, Grid Engine, Son of Grid Engine,<br class="">Univa, Platform LSF, etc… others?<br class=""></blockquote><br class="">Yes and no - we run Slurm and use its own scheduling mechanisms but you<br class="">could plug in Moab should you wish.<br class=""><br class="">Torque has an example pbs_sched but that's just a FIFO, you'd want to<br class="">look at Maui or Moab for more sophisticated scheduling.<br class=""><br class=""><blockquote type="cite" class="">_Shared filesystems_: NFS, pNFS, Lustre, GPFS, PVFS2, GlusterFS, ? <br class=""></blockquote><br class="">GPFS here - copes well with lots of small files (looks at one OpenFOAM<br class="">project that has over 19 million files & directories - mostly<br class="">directories - and sighs).<br class=""><br class=""><blockquote type="cite" class="">_Library management_: Lmod, ? 
<br class=""></blockquote><br class="">I've been using environment modules for almost a decade now but our<br class="">recent cluster has switched to Lmod.<br class=""><br class=""><blockquote type="cite" class="">_Performance monitoring_: Ganglia, Nagios, ?<br class=""></blockquote><br class="">We use Icinga for monitoring infrastructure, including polling xCAT and<br class="">Slurm for node information such as error LEDs, down nodes, etc.<br class=""><br class="">We have pnp4nagios integrated with our Icinga to record time series<br class="">information about memory usage, etc.<br class=""><br class=""><blockquote type="cite" class="">_Cluster management toolkits_: I believe these perform many of the<br class="">functions above, all wrapped up in one tool? Rocks, Oscar, Scyld, Bright, ?<br class=""></blockquote><br class="">N/A here.<br class=""><br class="">All the best!<br class="">Chris<br class="">-- <br class=""> Christopher Samuel Senior Systems Administrator<br class=""> VLSCI - Victorian Life Sciences Computation Initiative<br class=""> Email: <a href="mailto:samuel@unimelb.edu.au" class="">samuel@unimelb.edu.au</a> Phone: +61 (0)3 903 55545<br class=""> <a href="http://www.vlsci.org.au/" class="">http://www.vlsci.org.au/</a> <a href="http://twitter.com/vlsci" class="">http://twitter.com/vlsci</a><br class=""><br class="">_______________________________________________<br class="">Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" class="">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br class="">To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" class="">http://www.beowulf.org/mailman/listinfo/beowulf</a><br class=""></div></div></blockquote></div><br class=""></div></div></blockquote></div><br class=""></body></html>