<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body dir="auto">
<div>At least in my case, I don’t do anything VM-specific for my setups, and treat them as close to bare metal as I can:</div>
<div><br>
</div>
<div>- I start with a router VM (pfSense, Shorewall, etc.)</div>
<div>- I set up one or more dumb layer 2 switch interconnects among the router and other nodes as needed</div>
<div>- I start provisioning the management and other nodes: setting up DHCP and PXE rather than cloning installed VMs, etc.</div>
<div>- I work over ssh as soon as it’s available</div>
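<div>The DHCP/PXE step can be sketched with a minimal dnsmasq config on the router VM. The interface name, address range, TFTP root, and boot filename below are all hypothetical placeholders; adjust them to your own setup:</div>

```shell
# Write a minimal dnsmasq config for the node-facing layer-2 segment:
cat > dnsmasq-nodes.conf <<'EOF'
interface=eth1
dhcp-range=10.0.0.100,10.0.0.200,12h
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0
EOF
# On the router VM you would then run: dnsmasq --conf-file=dnsmasq-nodes.conf
```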
<br>
<div dir="ltr">
<blockquote type="cite">On Feb 10, 2020, at 7:55 AM, Lux, Jim (US 337K) via Beowulf <beowulf@beowulf.org> wrote:<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div>
<div class="WordSection1">
<p class="MsoNormal">One comment on “building a cluster with VMs”<o:p></o:p></p>
<p class="MsoNormal"><br>
Part of bringing up a cluster is learning how to manage the interconnects, load software onto the nodes, and find the tools to manage a bunch of different machines simultaneously, as well as handle issues around shared network drives, boot images, etc.<br>
<br>
I would think (but have not tried) that the multi-VM approach is a bit too unrealistically easy – I assume you can do MPI between VMs, so you could certainly practice with parallel coding. But it seems that spinning up identical instances, all of which can see the same host resources, on the same machine with the same display and keyboard, kind of bypasses a lot of the hard stuff.<o:p></o:p></p>
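For what it's worth, the MPI-between-VMs part mostly comes down to a hostfile and passwordless ssh. A sketch, assuming Open MPI and two hypothetical guests named vm1 and vm2:

```shell
# Each line is "host slots=N" in Open MPI hostfile syntax (2 vCPUs per guest assumed):
printf '%s slots=2\n' vm1 vm2 > hostfile
cat hostfile
# With Open MPI installed on both guests and passwordless ssh between them:
# mpirun -np 4 --hostfile hostfile ./your_mpi_program
```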
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">OTOH, if you want a cheap experience of getting booting working, controlling multiple machines, learning pdsh, etc., you could just get three or four Raspberry Pis or BeagleBones and face all the problems of a real cluster (including managing a rat’s nest of wires and cables).<br>
<br>
<br>
<o:p></o:p></p>
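A first pdsh exercise on a few Pis might look like this (node names pi1..pi4 are hypothetical; pdsh and dshbak come from the pdsh package):

```shell
# Build the comma-separated host list that pdsh's -w flag expects:
hosts=$(printf 'pi%d,' 1 2 3 4 | sed 's/,$//')
echo "$hosts"   # pi1,pi2,pi3,pi4
# Run one command on every node, then fold identical replies together:
# pdsh -w "$hosts" uptime
# pdsh -w "$hosts" uname -r | dshbak -c
```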
<p class="MsoNormal"><o:p> </o:p></p>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b><span style="font-size:12.0pt;color:black">From: </span></b><span style="font-size:12.0pt;color:black">Beowulf <beowulf-bounces@beowulf.org> on behalf of "jaquilina@eagleeyet.net" <jaquilina@eagleeyet.net><br>
<b>Date: </b>Sunday, February 9, 2020 at 10:30 PM<br>
<b>To: </b>"Renfro, Michael" <Renfro@tntech.edu>, "beowulf@beowulf.org" <beowulf@beowulf.org><br>
<b>Subject: </b>[EXTERNAL] Re: [Beowulf] Have machine, will compute: ESXi or bare metal?<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<p class="MsoNormal"><span lang="EN-GB">Hi guys, just piggybacking on this thread.</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB"> </span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">I am considering upgrading my PC to 64 GB of RAM and setting it up as a Windows 10-based Hyper-V host. Would you say this is a good way to learn how to put a cluster together without the need to invest in a small number of servers? My PC has a Ryzen 5 3600 (6-core/12-thread) CPU and an MSI B450 Tomahawk Max motherboard, currently with 32 GB of DDR4-3200, upgradable to 64 GB.</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB"> </span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">Let me know your thoughts.</span><o:p></o:p></p>
<p class="MsoNormal"> <o:p></o:p></p>
<div>
<p class="MsoNormal"><span lang="EN-GB">Regards,</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">Jonathan Aquilina</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB"> </span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">EagleEyeT</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">Phone +356 20330099</span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">Sales – <a href="mailto:sales@eagleeyet.net">
sales@eagleeyet.net</a></span><o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-GB">Support – support@eagleeyet.net</span><o:p></o:p></p>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> Beowulf <beowulf-bounces@beowulf.org> <b>On Behalf Of
</b>Renfro, Michael<br>
<b>Sent:</b> Monday, 10 February 2020 03:17<br>
<b>To:</b> beowulf@beowulf.org<br>
<b>Subject:</b> Re: [Beowulf] Have machine, will compute: ESXi or bare metal?<o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
<p class="MsoNormal">No reason you can’t, especially if you’re not interested in benchmark runs (with a lot of heavily loaded VMs, there could be CPU contention on the host).
<o:p></o:p></p>
<div>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">Any cluster development work I’ve done lately has used VMware VMs exclusively.<o:p></o:p></p>
<div>
<p class="MsoNormal"><br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">On Feb 9, 2020, at 7:10 PM, Mark Kosmowski <mark.kosmowski@solidstatecomputation.com> wrote:<o:p></o:p></p>
</blockquote>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<div>
<p>I purchased a Cisco UCS C460 M2 (4 × 10-core Xeons, 128 GB total RAM) for $115 in my local area. With ESXi (free license), I am limited to 8 vCPUs per VM. Could I make a virtual Beowulf cluster out of some of these VMs? I'm thinking this way I can learn cluster admin without paying the power bill for my ancient Opteron boxes, and also scratch my illumos itch while computing on Linux.<o:p></o:p></p>
<p>Thank you!<o:p></o:p></p>
</div>
<p class="MsoNormal">_______________________________________________<br>
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf<o:p></o:p></p>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
{mso-style-name:msonormal;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle16
{mso-style-type:personal;
font-family:"Calibri",sans-serif;
color:windowtext;}
span.EmailStyle20
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</body>
</html>