To each their own, I suppose, but I've never found myself with so much free time managing our equipment that writing my own cluster management software - duplicating what already exists in several forms - seemed like a better use of it than working on something that directly affects my users (more up-to-date software, performance tuning, etc.). The flip side of there being nothing particularly complicated about it is that I don't see a compelling reason to be manually editing DHCP/TFTP configuration files and dealing with similar tedium. How many unique ways of working on a cluster are there, really, that make it worth giving up that much free labor? If I had the time to write one of these things, I'd probably give it to one of the existing projects in the form of tweaks that would help my use case. Anyway, my two cents.

--
____ *Note: UMDNJ is now Rutgers-Biomedical and Health Sciences*
|| \\UTGERS      |---------------------*O*---------------------
||_// Biomedical | Ryan Novosielski - Senior Technologist
|| \\ and Health | novosirj@rutgers.edu - 973/972.0922 (2x0922)
||  \\ Sciences  | OIRT/High Perf & Res Comp - MSB C630, Newark
     `'

On Nov 5, 2015, at 23:49, Stu Midgley <sdm900@gmail.com> wrote:

> Write your own. I personally find all the packaged systems way too
> stifling - they don't quite do what you want, so you end up bending
> how you work to fit them.
>
> It is relatively simple to set up PXE booting and network booting from
> NFS or Lustre or any other shared file system (or just rsync the image
> down to a RAM disk).
>
> At least then you have a bash script that you can tune to do what you want.
>
> Once you have a booted image, pdsh is about all you need.
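(For anyone weighing the roll-your-own route described above, the moving parts look roughly like the sketch below. It is only an illustration - the subnet, server address, paths, and node names are invented, so adjust everything for your own site.)

#!/bin/bash
# Rough sketch of the roll-your-own approach described above. All values
# (subnet, server address, paths, node names) are made up for illustration.

# 1. DHCP: point PXE clients at a TFTP server and a boot loader.
cat > /etc/dhcp/dhcpd.conf <<'EOF'
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.1;        # TFTP server
  filename "pxelinux.0";       # PXE boot loader
}
EOF

# 2. PXELINUX: boot a kernel with an NFS root (or a RAM-disk image).
mkdir -p /var/lib/tftpboot/pxelinux.cfg
cat > /var/lib/tftpboot/pxelinux.cfg/default <<'EOF'
default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/export/node-image ip=dhcp ro
EOF

# ...or have a custom initrd rsync the image into a RAM disk instead of
# mounting an NFS root, e.g.:
#   rsync -a 10.0.0.1::node-image/ /sysroot/

# 3. Once the nodes are booted, day-to-day work is mostly pdsh:
pdsh -w 'node[001-128]' uptime

(Steps 1 and 2 are exactly the sort of DHCP/TFTP hand-editing mentioned above, and the part that the packaged tools take care of for you.)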
> On Fri, Nov 6, 2015 at 10:58 AM, Novosielski, Ryan
> <novosirj@ca.rutgers.edu> wrote:
>> Another vote here for Warewulf. Good stuff. Easy to use, but not
>> lacking any features I need.
>>
>> --
>> ____ *Note: UMDNJ is now Rutgers-Biomedical and Health Sciences*
>>  || \\UTGERS      |---------------------*O*---------------------
>>  ||_// Biomedical | Ryan Novosielski - Senior Technologist
>>  || \\ and Health | novosirj@rutgers.edu - 973/972.0922 (2x0922)
>>  ||  \\ Sciences  | OIRT/High Perf & Res Comp - MSB C630, Newark
>>      `'
>> ________________________________________
>> From: Beowulf [beowulf-bounces@beowulf.org] On Behalf Of Vaughn Clinton [vclinton@msn.com]
>> Sent: Thursday, November 05, 2015 9:40 PM
>> To: Chris Samuel; beowulf@beowulf.org
>> Subject: Re: [Beowulf] Diskless cluster provisioning/installation
>>
>> xCAT is long in the tooth now. I'd take a serious look at Warewulf.
>> I've used WW and was happy with it:
>>
>> http://warewulf.lbl.gov/trac
>>
>>> From: samuel@unimelb.edu.au
>>> To: beowulf@beowulf.org
>>> Date: Fri, 6 Nov 2015 10:52:54 +1100
>>> Subject: Re: [Beowulf] Diskless cluster provisioning/installation
>>>
>>> On Wed, 4 Nov 2015 05:15:10 PM Matthew Wallis wrote:
>>>
>>>> xCAT is still fairly popular.
>>>
>>> This is what we use here on our IBM and Lenovo gear (and previously on
>>> our SGI gear too) for statelite (diskless nodes booting a RAM disk,
>>> with NFS mounts for certain files & directories where we want to
>>> preserve information, such as GPFS config, Slurm logs, etc.).
>>>
>>> http://sourceforge.net/p/xcat/wiki/XCAT_Linux_Statelite/
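(For the curious, the idea behind a statelite-style setup is roughly the sketch below: a RAM-disk root with a few node-specific paths redirected onto NFS so they survive reboots. This is a hand-rolled illustration, not xCAT's actual mechanism - the server name, export path, and persisted directories are all invented examples.)

#!/bin/bash
# Hand-rolled illustration of the statelite idea (NOT xCAT's implementation).
# Server name, export path, and persisted directories are invented examples.

NODE=$(hostname -s)
NFS_SERVER=mgmt01                     # hypothetical management/NFS server
PERSIST=/export/statelite/$NODE       # per-node persistent area on NFS

# The root filesystem is a RAM disk loaded at boot; anything not listed
# below is volatile and disappears on reboot.

# Mount the node's persistent area and bind selected paths into place,
# e.g. GPFS configuration and Slurm logs.
mkdir -p /persist
mount -t nfs "$NFS_SERVER:$PERSIST" /persist
for dir in /var/mmfs /var/log/slurm; do
    mkdir -p "/persist$dir" "$dir"
    mount --bind "/persist$dir" "$dir"
done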
>>>
>>> All the best,
>>> Chris
>>> --
>>>  Christopher Samuel    Senior Systems Administrator
>>>  VLSCI - Victorian Life Sciences Computation Initiative
>>>  Email: samuel@unimelb.edu.au    Phone: +61 (0)3 903 55545
>>>  http://www.vlsci.org.au/    http://twitter.com/vlsci
>>>
>>> _______________________________________________
>>> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
>>> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>>
>> _______________________________________________
>> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>
> --
> Dr Stuart Midgley
> sdm900@sdm900.com