[Beowulf] shared compute/storage WAS: Re: Lustre Upgrades

Jonathan Engwall engwalljonathanthereal at gmail.com
Thu Jul 26 11:13:45 PDT 2018


This made me think about distributed routing. This:
https://wiki.openstack.org/wiki/Distributed_Router_for_OVS
might be my next horrible idea. It looks interesting.
It seems to me that moving the load off heavily hit machines could be
accomplished with elastic deployment and distributed routing.
At present I have only enough capacity to test these ideas and cross my fingers.
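[Editor's note: for readers unfamiliar with the DVR page linked above, enabling distributed routing in a stock Neutron/OVS deployment is mainly a configuration change. The options below are from the OpenStack Neutron documentation; file paths and the split of roles across nodes vary by deployment, so treat this as an illustrative sketch, not a recipe.]

```ini
; neutron.conf on the controller
[DEFAULT]
; make newly created routers distributed by default
router_distributed = True

; l3_agent.ini on each compute node
[DEFAULT]
; run an L3 agent locally so routed traffic for local
; instances is handled on the compute node itself
agent_mode = dvr

; l3_agent.ini on the network node
[DEFAULT]
; the network node still centralizes SNAT for
; instances without floating IPs
agent_mode = dvr_snat
```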

On Thu, Jul 26, 2018, 6:50 AM John Hearns via Beowulf <beowulf at beowulf.org>
wrote:

> As we are discussing storage performance, may I slightly blow the trumpet
> for someone else
>
> https://www.ellexus.com/ellexus-contributes-to-global-paper-on-how-to-analyse-i-o/
> https://arxiv.org/abs/1807.04985
>
>
>
>
> On Thu, 26 Jul 2018 at 15:45, Michael Di Domenico <mdidomenico4 at gmail.com>
> wrote:
>
>> On Thu, Jul 26, 2018 at 9:30 AM, John Hearns via Beowulf
>> <beowulf at beowulf.org> wrote:
>> >>in theory you could cap the performance interference using VMs and
>> >>cgroup controls, but i'm not sure how effective that actually is (no
>> >>data) in HPC.
>> >
>> > I looked quite heavily at performance capping for RDMA applications in
>> > cgroups about a year ago.
>> > It is very doable; however, you need a recent 4-series kernel. Sadly we
>> > were using 3-series kernels on RHEL.
>>
>> interesting, though i'm not sure i'd dive that deep.  for one i'm
>> generally restricted to rhel, so that means a 3.x kernel right now.
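[Editor's note: the "recent 4-series kernel" requirement above refers to the cgroup v2 `rdma` controller, merged around kernel 4.11, which caps RDMA verbs resources (HCA handles and objects) per group via cgroupfs. A rough sketch of capping one job is below; the device name `mlx5_0`, the limit values, the group name, and `./my_rdma_app` are all illustrative, and the commands need root on an RDMA-capable 4.11+ kernel.]

```shell
# Mount the cgroup v2 hierarchy (if not already mounted)
mount -t cgroup2 none /sys/fs/cgroup

# Enable the rdma controller for child groups
echo "+rdma" > /sys/fs/cgroup/cgroup.subtree_control

# Create a group and cap RDMA verbs resources for one HCA
mkdir /sys/fs/cgroup/hpcjob
echo "mlx5_0 hca_handle=2 hca_object=2000" > /sys/fs/cgroup/hpcjob/rdma.max

# Move this shell into the capped group, then launch the job under it
echo $$ > /sys/fs/cgroup/hpcjob/cgroup.procs
./my_rdma_app   # hypothetical application binary
```

On the 3.x RHEL kernels mentioned above, the `rdma` controller (and cgroup v2 generally) is simply absent, which is the limitation being described.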
>>
>> but also i feel like this might be an area where VMs might provide a
>> layer of management that containers don't.  i could conceive that the
>> storage and compute VMs might not necessarily run the same kernel
>> version and/or OS
>>
>> i'd also be more amenable to having two high-speed NICs, both IB or
>> one IB and one 40GigE, one for each VM, rather than fair-sharing the
>> work queues of one IB card
>>
>> dunno, just spitballing here.  maybe something sticks enough for me
>> to stand up something with my older cast-off hardware
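[Editor's note: one concrete way to realize the one-NIC-per-VM idea above, not taken from the thread itself, is PCI passthrough via libvirt, handing each guest exclusive use of its own HCA so no work-queue sharing occurs. The domain XML fragment below is an illustrative sketch from the libvirt hostdev format; the PCI address 0000:81:00.0 is a made-up example and must be replaced with the real device address from `lspci`.]

```xml
<!-- Pass a host IB HCA (example address 0000:81:00.0) through
     to the guest; 'managed' lets libvirt detach/reattach the
     host driver automatically -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```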
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
>>

