<div dir="auto">This made me think about distributed routing:<div dir="auto"><a href="https://wiki.openstack.org/wiki/Distributed_Router_for_OVS">https://wiki.openstack.org/wiki/Distributed_Router_for_OVS</a><br></div><div dir="auto">Might be my next horrible idea. It looks interesting.</div><div dir="auto">It seems to me that moving the load off heavily hit machines could be accomplished with elastic deployment and distributed routing.</div><div dir="auto">Presently I only have enough power to test these ideas and cross my fingers.</div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jul 26, 2018, 6:50 AM John Hearns via Beowulf <<a href="mailto:beowulf@beowulf.org">beowulf@beowulf.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>As we are discussing storage performance, may I slightly blow the trumpet for someone else:</div><div><a href="https://www.ellexus.com/ellexus-contributes-to-global-paper-on-how-to-analyse-i-o/" target="_blank" rel="noreferrer">https://www.ellexus.com/ellexus-contributes-to-global-paper-on-how-to-analyse-i-o/</a></div><div><a href="https://arxiv.org/abs/1807.04985" target="_blank" rel="noreferrer">https://arxiv.org/abs/1807.04985</a></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, 26 Jul 2018 at 15:45, Michael Di Domenico <<a href="mailto:mdidomenico4@gmail.com" target="_blank" rel="noreferrer">mdidomenico4@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, Jul 26, 2018 at 9:30 AM, John Hearns via Beowulf<br>
<<a href="mailto:beowulf@beowulf.org" target="_blank" rel="noreferrer">beowulf@beowulf.org</a>> wrote:<br>
>>in theory you could cap the performance interference using VM's and<br>
>>cgroup controls, but i'm not sure how effective that actually is (no<br>
>>data) in HPC.<br>
><br>
> I looked quite heavily at performance capping for RDMA applications in<br>
> cgroups about a year ago.<br>
> It is very doable; however, you need a recent 4-series kernel. Sadly we were<br>
> using 3-series kernels on RHEL.<br>
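A minimal sketch of what that capping can look like with the cgroup v2 rdma controller (merged around kernel 4.11, which lines up with the 4-series requirement above). The group name, the device name mlx4_0, and the limit values are illustrative, not taken from the original setup:

```shell
# Requires root and a kernel built with CONFIG_CGROUP_RDMA.
# Mount the unified cgroup v2 hierarchy if it is not already mounted.
mount -t cgroup2 none /sys/fs/cgroup/unified

# Make the rdma controller available to child groups.
echo "+rdma" > /sys/fs/cgroup/unified/cgroup.subtree_control

# Create a group for one tenant's (or VM's) RDMA-using processes.
mkdir /sys/fs/cgroup/unified/rdma_capped

# Cap the HCA handles and objects the group may allocate on mlx4_0.
echo "mlx4_0 hca_handle=16 hca_object=1000" \
    > /sys/fs/cgroup/unified/rdma_capped/rdma.max

# Move a process (PID chosen by the admin) into the capped group.
echo "$PID" > /sys/fs/cgroup/unified/rdma_capped/cgroup.procs
```

Note that rdma.max limits RDMA *resources* (handles and objects, i.e. how many QPs, CQs, MRs a group can hold), not bandwidth directly, so it bounds resource consumption rather than throughput.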
<br>
interesting, though i'm not sure i'd dive that deep. for one i'm<br>
generally restricted to rhel, so that means a 3.x kernel right now.<br>
<br>
but also i feel like this might be an area where VMs might provide a<br>
layer of management that containers don't. i could conceive that the<br>
storage and compute VMs might not necessarily run the same kernel<br>
version and/or O/S<br>
<br>
i'd also be more amenable to having two high-speed nics, both IB or<br>
one IB and one 40GigE, one for each VM, rather than fair-sharing the<br>
work queues of one IB card<br>
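One common way to dedicate a whole NIC to a VM, rather than fair-sharing one card, is PCI passthrough with vfio-pci. A sketch under assumptions: the PCI address 0000:81:00.0 for the second adapter is hypothetical, and the host needs an IOMMU enabled (e.g. intel_iommu=on on the kernel command line):

```shell
# Load the vfio-pci driver on the host.
modprobe vfio-pci

# Tell the kernel to bind this device to vfio-pci on the next probe
# (driver_override exists since kernel 3.16).
echo vfio-pci > /sys/bus/pci/devices/0000:81:00.0/driver_override

# Detach the NIC from its current host driver, then reprobe it.
echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
echo 0000:81:00.0 > /sys/bus/pci/drivers_probe

# The adapter can then be handed to a guest, e.g. with QEMU:
# qemu-system-x86_64 ... -device vfio-pci,host=81:00.0
```

The guest then sees the adapter natively and can run its own driver stack, which fits the idea above of storage and compute VMs running different kernels.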
<br>
dunno, just spitballing here. maybe something sticks enough for me<br>
to stand up something with my older cast-off hardware<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank" rel="noreferrer">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>
</blockquote></div>