<div dir="ltr">I Third OpenHPC, or at least the Warewulf underpinnings in it. <a href="http://warewulf.lbl.gov/" target="_blank">http://warewulf.lbl.gov/</a><div><br></div><div>For "learning" the software stack you may consider beefing up your current node and running virtualized environment inside it? I use the community version of Proxmox (<a href="https://www.proxmox.com/en/downloads" target="_blank">https://www.proxmox.com/en/downloads</a>). On Ubuntu Virt-Manager+QEMU+KVM is equally capable but a bit less obvious for configuring VMS & Containers. Running 3 nodes, each with 8GB RAM and leaving 8GB for the host should be sufficient to get the software setup and test the basic adminish stuff and strategy.</div><div><br></div><div>The key things for a real cluster IMHO are:</div><div>1) SSH Configuration - ssh keys for passwordless access to all compute</div><div>2) a shared filesystem - NFS, Lustre, or for Virtual machines on severe budget Plan-9 (<a href="https://en.wikipedia.org/wiki/9P_(protocol)" target="_blank">https://en.wikipedia.org/wiki/9P_(protocol)</a>). Maybe put this NFS and a couple old disks an old Atom based machine you've been holding the door open with.</div><div>3) A capable scheduler, slurm being a current favorite but several tried and true options that may be better for your specific project</div><div>4) Systems management. Ram Based Filesystems like Warewulf supports are great because a reboot ensures that any bit-rot on a "node" is fixed.... especially if you format the local "scratch" hard disk on boot :). I see a lot of ansible and other methods that seem popular but above my pea brain or budget.</div><div>5) parallel shells. I used PDSH a lot but several attempts have been made over the years. You almost can't have too may ways to run in parallel. </div><div>6) remote power control and consoles - IPMI/BMC or equivalent is a must have when you scale up, but for the starter kit it would be good to have too. Even some really low end Stuff has them these days and it's a feature you'll quickly consider essential. For a COTS cluster without the built in BMC, this looks promising.... <a href="https://github.com/Fmstrat/diy-ipmi">https://github.com/Fmstrat/diy-ipmi</a></div><div><br></div><div>Not really required, but I mention my good friends Screen and Byobu that have saved my bacon many times when an unexpected disconnect (power / network etc) of my client would have ravaged a system into an unknown state.</div><div><br></div><div>Bonus points for folks who manage & Monitor the cluster. When something's broke does the system tell you before the users? If yes, you have the "Right Stuff" being monitored.</div><div><br></div><div>For me the notion of clusters not being heterogeneous is overstated. Assuming you compile on a given node (A Master or Login node or shell to a compute node with a dev environment installed) at a minimum you want the code to run on the other nodes. Similar generations of processors makes this pretty likely. Identical makes it simple but probably not worth the cost on an experiment/learning environment unless you plan to benchmark results. Setting up queues of nodes that are identical so that a code runs efficiently on a given subset of nodes is a fair compromise. None of this matters in the Virtual Machine environment if you decide to start there.</div><div><br></div><div>And everything Doug just said... 
On Sun, Mar 3, 2019 at 3:25 AM John Hearns via Beowulf <beowulf@beowulf.org> wrote:

I second OpenHPC. It is actively maintained and easy to set up.

Regarding the hardware, have a look at Doug Eadline's Limulus clusters. I think they would be a good fit. Doug's site is excellent in general: https://www.clustermonkey.net/

Also, some people build Raspberry Pi clusters for learning.

On Sun, 3 Mar 2019 at 01:16, Renfro, Michael <Renfro@tntech.edu> wrote:

Heterogeneous is possible, but the slower system will be a bottleneck if you have calculations that require both systems to work in parallel and synchronize with each other periodically. You might also find bottlenecks with your network interconnect, even on homogeneous systems.

I've never used ROCKS, and OSCAR doesn't look to have been updated in a few years (maybe it doesn't need to be). OpenHPC is a similar product, more recently updated. But except for the cluster I manage now, I always just went with a base operating system for the nodes and added HPC libraries and services as required.

> On Mar 2, 2019, at 7:34 AM, Marco Ippolito <ippolito.marco@gmail.com> wrote:
> 
> Hi all,
> 
> I'm developing an application which needs to use tools and other applications that excel in a distributed environment:
> - HPX (https://github.com/STEllAR-GROUP/hpx)
> - Kafka (http://kafka.apache.org/)
> - a blockchain tool.
> This is why I'm eager to learn how to deploy a Beowulf cluster.
> 
> I've read some info here:
> - https://en.wikibooks.org/wiki/Building_a_Beowulf_Cluster
> - https://www.linux.com/blog/building-beowulf-cluster-just-13-steps
> - https://www-users.cs.york.ac.uk/~mjf/pi_cluster/src/Building_a_simple_Beowulf_cluster.html
> 
> And I have 2 starting questions to clarify how I should proceed to build the cluster correctly:
> 
> 1) My starting point is the PC I'm working with at the moment, with these features:
> - Corsair DDR3 RAM, PC1600, 32 GB, CL10
> - Intel Core i7-4790K CPU (LGA 1150), 4.00 GHz
> - Samsung MZ-76E500B 860 EVO internal SSD, 500 GB, 2.5" SATA III, black/grey
> - ASUS H97-PLUS motherboard
> - DVD-RW drive
> 
> As the OS I'm using Ubuntu 18.04.01 Server Edition.
> 
> On one side I read that it is better to put the same type of hardware in the same cluster: PCs of the same type. But on the other side, heterogeneous hardware (servers or PCs) can also be deployed.
> So... which hardware should I consider for the second node, if the features of the very first "node" are the ones above?
> 
> 2) I read that some software (Rocks, OSCAR) would make the cluster configuration easier and smoother. But I also read that using the same OS, at exactly the same version, for all nodes (in my case Ubuntu 18.04.01 Server Edition) could be a safe start.
> So... is it strictly necessary to use Rocks or OSCAR to correctly configure the node network?
> 
> Looking forward to your kind hints and suggestions.
> Marco
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf