[Beowulf] shared compute/storage WAS: Re: Lustre Upgrades
Michael Di Domenico
mdidomenico4 at gmail.com
Thu Jul 26 06:20:29 PDT 2018
On Thu, Jul 26, 2018 at 3:14 AM, Jörg Saßmannshausen
<sassy-work at sassy.formativ.net> wrote:
> I once had this idea as well: using the spinning discs which I have in the
> compute nodes as part of a distributed scratch space. I was using glusterfs
> for that as I thought it might be a good idea. It was not.
I split the thread so as not to pollute the other discussion.
I'm curious if anyone has hard data on the above, but with the compute
encapsulated from the storage using VMs instead of just running both
directly on the node. In theory you could cap the performance
interference using VMs and cgroup controls, but I'm not sure how
effective that actually is in HPC (I have no data).
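For what it's worth, the cgroup side of that idea could look something
like the sketch below, using the cgroup v2 cpu.max and io.max
interfaces. All the paths, device numbers, and limits here are
illustrative assumptions on my part, not anything from the setup being
discussed; it needs root and a cgroup2 mount.

```shell
# Hedged sketch: throttle a storage daemon sharing a compute node.
# Assumes cgroup v2 mounted at /sys/fs/cgroup; limits are made-up examples.

CG=/sys/fs/cgroup/storage
mkdir -p "$CG"

# Cap CPU: at most 200ms of CPU time per 100ms period (~2 cores' worth).
echo "200000 100000" > "$CG/cpu.max"

# Cap IO on the device backing the scratch disks (major:minor 8:0 assumed):
# ~100 MB/s reads, ~50 MB/s writes.
echo "8:0 rbps=104857600 wbps=52428800" > "$CG/io.max"

# Move the storage daemon (e.g. a glusterfsd PID, hypothetical here)
# into the cgroup so the limits apply to it.
echo "$GLUSTERFSD_PID" > "$CG/cgroup.procs"
```

Whether those caps actually keep the storage daemon from perturbing
tightly-coupled MPI jobs is exactly the "no data" question above.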
I've been thinking about this recently as a way to rebalance some of
the rack loading throughout my data center. Yes, I can move things
around within the racks, but then it turns into a cabling nightmare.