[Beowulf] shared compute/storage WAS: Re: Lustre Upgrades
John Hearns
hearnsj at googlemail.com
Thu Jul 26 06:48:56 PDT 2018
As we are discussing storage performance, may I slightly blow the trumpet
for someone else:
https://www.ellexus.com/ellexus-contributes-to-global-paper-on-how-to-analyse-i-o/
https://arxiv.org/abs/1807.04985
On Thu, 26 Jul 2018 at 15:45, Michael Di Domenico <mdidomenico4 at gmail.com>
wrote:
> On Thu, Jul 26, 2018 at 9:30 AM, John Hearns via Beowulf
> <beowulf at beowulf.org> wrote:
> >>in theory you could cap the performance interference using VMs and
> >>cgroup controls, but I'm not sure how effective that actually is (no
> >>data) in HPC.
> >
> > I looked quite heavily at performance capping for RDMA applications in
> > cgroups about a year ago.
> > It is very doable; however, you need a recent 4.x kernel. Sadly, we were
> > using 3.x kernels on RHEL.
>
> Interesting, though I'm not sure I'd dive that deep. For one, I'm
> generally restricted to RHEL, so that means a 3.x kernel right now.
>
> But also, I feel like this might be an area where VMs might provide a
> layer of management that containers don't. I could conceive that the
> storage and compute VMs might not necessarily run the same kernel
> version and/or OS.
>
> I'd also be more amenable to having two high-speed NICs, both IB or
> one IB and one 40GigE, one for each VM, rather than fair-sharing the
> work queues of one IB card.
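Dedicating a whole HCA to each VM can be done with PCI passthrough rather
than any queue-level sharing; a rough sketch using the libvirt Python
bindings, where the domain name "storage-vm" and the PCI address are
made-up placeholders:

    # Minimal sketch, not production code: pass a second IB HCA straight
    # through to the storage VM so compute and storage don't share one card.
    # "storage-vm" and PCI address 0000:81:00.0 are illustrative only.
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("storage-vm")
    # Add the device to the persistent definition; it appears on next boot.
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()

If two physical cards aren't available, SR-IOV virtual functions can be
passed through the same way.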
>
> Dunno, just spitballing here. Maybe something sticks enough for me
> to stand up something with my older cast-off hardware.