[Beowulf] shared compute/storage WAS: Re: Lustre Upgrades

John Hearns hearnsj at googlemail.com
Thu Jul 26 06:30:57 PDT 2018


> In theory you could cap the performance interference using VMs and
> cgroup controls, but I'm not sure how effective that actually is in
> HPC (no data).

I looked quite heavily at performance capping for RDMA applications in
cgroups about a year ago.
It is very doable, but you need a recent 4.x kernel. Sadly we were
using 3.x kernels on RHEL.
Parav Pandit is the go-to guy for this:
https://www.openfabrics.org/images/eventpresos/2016presentations/115rdmacont.pdf
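
For what it is worth, here is a minimal sketch of what that capping can
look like on a cgroup v2 hierarchy (the rdma controller needs 4.11 or
later). The device name, limits, PID and cgroup path are illustrative
assumptions, not values from any real setup:

    #!/usr/bin/env python3
    # Sketch only: cap RDMA, CPU and block-I/O usage for one job via cgroup v2.
    # Assumes a unified cgroup v2 mount at /sys/fs/cgroup and root privileges.
    import os

    CGROUP_ROOT = "/sys/fs/cgroup"
    JOB = os.path.join(CGROUP_ROOT, "capped-job")   # hypothetical cgroup name

    def cg_write(path, value):
        with open(path, "w") as f:
            f.write(value)

    # Enable the controllers for child cgroups.
    cg_write(os.path.join(CGROUP_ROOT, "cgroup.subtree_control"), "+rdma +cpu +io")

    os.makedirs(JOB, exist_ok=True)

    # rdma controller (merged in 4.11, discussed in the slides above): limit
    # the HCA handles/objects this job may allocate on an assumed mlx5_0 device.
    cg_write(os.path.join(JOB, "rdma.max"), "mlx5_0 hca_handle=64 hca_object=10000")

    # cpu controller: at most 4 CPUs' worth of time (quota/period in microseconds).
    cg_write(os.path.join(JOB, "cpu.max"), "400000 100000")

    # io controller: ~100 MB/s read and write on the device with major:minor 8:16
    # (illustrative), so a co-located storage daemon cannot starve the compute job.
    cg_write(os.path.join(JOB, "io.max"), "8:16 rbps=104857600 wbps=104857600")

    # Finally, move the process to be capped into the group (placeholder PID).
    cg_write(os.path.join(JOB, "cgroup.procs"), "12345")

Whether that actually keeps tail latencies sane under a real mixed MPI
plus storage load is exactly the 'no data' part, of course.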

On Thu, 26 Jul 2018 at 15:27, John Hearns <hearnsj at googlemail.com> wrote:

> For VM substitute 'container' - since containerisation is intimately
> linked with cgroups anyway.
> Google 'CEPH Docker' and you will find plenty of information.
>
> Someone I work with tried out CEPH on Docker the other day and got into
> some knots regarding access to the actual hardware devices.
> He then downloaded Minio and got it working very rapidly. Sorry - I am
> only repeating this story second-hand.
>
> On Thu, 26 Jul 2018 at 15:20, Michael Di Domenico <mdidomenico4 at gmail.com>
> wrote:
>
>> On Thu, Jul 26, 2018 at 3:14 AM, Jörg Saßmannshausen
>> <sassy-work at sassy.formativ.net> wrote:
>> > I once had this idea as well: using the spinning discs which I have in
>> > the compute nodes as part of a distributed scratch space. I was using
>> > glusterfs for that as I thought it might be a good idea. It was not.
>>
>> I split the thread so as not to pollute the other discussion.
>>
>> I'm curious if anyone has any hard data on the above, but with the
>> compute encapsulated from the storage using VMs instead of just
>> separate processes?
>>
>> In theory you could cap the performance interference using VMs and
>> cgroup controls, but I'm not sure how effective that actually is in
>> HPC (no data).
>>
>> I've been thinking about this recently as a way to rebalance some of the
>> rack loading throughout my data center. Yes, I can move things around
>> within the racks, but then it turns into a cabling nightmare.
>>
>> discuss?
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
>>
>