Hello all. I am just entering an HPC Sales Engineering role and would like to focus my learning on the most relevant material. I have searched near and far for a current survey of some sort listing the most-used "stacks", but cannot seem to find one that is free. I was breaking things down along these lines:

OS distro: CentOS, Debian, TOSS, etc.? I know some come trimmed down and also include specific HPC kernels/libraries, like CNL, CNK, INK?

MPI options: MPICH2, MVAPICH2, Open MPI, Intel MPI, ?

Provisioning software: Cobbler, Warewulf, xCAT, OpenStack, Platform HPC, ?

Configuration management: Warewulf, Puppet, Chef, Ansible, ?

Resource and job schedulers: I think these are basically the same thing? Torque, Lava, Maui, Moab, SLURM, Grid Engine, Son of Grid Engine, Univa, Platform LSF, etc. Others?

Shared filesystems: NFS, pNFS, Lustre, GPFS, PVFS2, GlusterFS, ?

Library management: Lmod, ?

Performance monitoring: Ganglia, Nagios, ?

Cluster management toolkits: I believe these perform many of the functions above, all wrapped up in one tool? Rocks, OSCAR, Scyld, Bright, ?

Does anyone have any observations as to which of the above are the most common? Or is that too broad? I believe most of the clusters I will be involved with will be in the 128-2000 core range, all on commodity hardware.

Thank you!

- Jeff