[Beowulf] CentOS 7.x for cluster nodes?

Lachlan Musicman datakid at gmail.com
Thu Dec 29 22:17:19 PST 2016

We use CentOS 7.2 exclusively in our cluster (SLURM, 12 nodes going up to
40 in the new year) and it works a treat. Same setup as you, but with some
shared NFS mounts. Systemd is fine - a few more keystrokes, but not the end
of the world.
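For reference, the "few more keystrokes" mostly come down to translating the old SysV-style commands to their systemctl equivalents. A rough mapping (slurmd is just an illustrative service name, substitute your own):

```shell
# SysV init (CentOS 6)             systemd (CentOS 7)
# service slurmd start       ->    systemctl start slurmd
# service slurmd status      ->    systemctl status slurmd
# service slurmd restart     ->    systemctl restart slurmd
# chkconfig slurmd on        ->    systemctl enable slurmd
# chkconfig --list           ->    systemctl list-unit-files
```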

Very happy


The most dangerous phrase in the language is, "We've always done it this way."

- Grace Hopper

On 30 December 2016 at 17:12, Andrew Mather <mathera at gmail.com> wrote:

> Hi All,
> Hope you're having/had time to relax and unwind with those near and dear.
> We are in the very early planning stages for our next cluster and I'm
> currently looking at the OS.  We're a CentOS shop and planning to stay that
> way for the foreseeable future, so please, no partisan OS wars :)
> When v7 of the Red Hat-based OSes appeared, the change to systemd in
> particular seemed to attract a lot of hate, but now that it's been out for a
> while, there doesn't seem to be as much.
> So, has anyone got recent war stories, good experiences etc. to share about
> v7 of CentOS specifically as the OS for cluster nodes?
> We don't have infiniband interconnects and don't use MPI, shared memory
> and the like.  All our jobs stay within the confines of the nodes and we
> have a variety of hardware configurations to accommodate different types of
> job (RAM, disk requirements, etc.).
> I'd welcome any info.
> Thanks and hope 2017 is kind for you.
> Andrew
> --
> -
>  https://picasaweb.google.com/107747436224613508618
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> "Voting is a lot like going to Bunnings really:
> You walk in confused, you stand in line, you have a sausage on the way out and
> at the end, you wind up with a bunch of useless tools"
> Joe Rios
> -
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf