<div dir="ltr"><div>Jörg,</div><div>you should look at BeeGFS and BeeOND (BeeGFS On Demand): <a href="https://www.beegfs.io/wiki/BeeOND">https://www.beegfs.io/wiki/BeeOND</a><br></div></div>
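<div dir="ltr"><div><br></div><div>Roughly how it slots into a job, as a sketch only: a small Python wrapper around the beeond command line, run from a scheduler prologue/epilogue. The flags follow the BeeOND quick-start examples and the nodefile/paths are placeholders, so do check them against your own installation.</div>
<pre>
#!/usr/bin/env python3
"""Sketch: build a per-job BeeGFS out of the compute nodes' local discs
with BeeOND, then tear it down again when the job finishes."""
import subprocess

NODEFILE = "/tmp/job_nodefile"   # one hostname per line (assumed path)
LOCAL_DIR = "/data/beeond"       # local disc/SSD directory on every node (assumed)
MOUNTPOINT = "/mnt/beeond"       # where the per-job file system gets mounted (assumed)

def beeond_start():
    # Creates the temporary BeeGFS across the job's nodes and mounts it on each of them.
    subprocess.run(["beeond", "start",
                    "-n", NODEFILE,     # nodes taking part
                    "-d", LOCAL_DIR,    # local storage used on every node
                    "-c", MOUNTPOINT],  # client mount point
                   check=True)

def beeond_stop():
    # Unmounts and removes the instance again; "-L -d" (delete the data, tear down)
    # is what the quick-start shows, but verify against the manual of your version.
    subprocess.run(["beeond", "stop", "-n", NODEFILE, "-L", "-d"], check=True)

if __name__ == "__main__":
    beeond_start()
    try:
        pass  # run the job with its scratch directory somewhere under MOUNTPOINT
    finally:
        beeond_stop()
</pre>
<div>The nice part for your case below: the scratch file system only ever lives on the nodes the job has been allocated, so the disc I/O stays with the job that causes it instead of hurting somebody else's node.</div></div>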
<br><div class="gmail_quote"><div dir="ltr">On Thu, 26 Jul 2018 at 09:15, Jörg Saßmannshausen <<a href="mailto:sassy-work@sassy.formativ.net">sassy-work@sassy.formativ.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear all,<br>
<br>
I once had this idea as well: using the spinning discs in the compute nodes as<br>
part of a distributed scratch space. I used GlusterFS for that, as I thought it<br>
might be a good idea. It was not. The reason is that as soon as a job creates,<br>
say, 700 GB of scratch data (a real job, not a fictional one!), the performance<br>
of the node hosting part of that data approaches zero due to the high disc I/O.<br>
This meant that the job running on that node was affected as well. So in the<br>
end this led to an installation with a separate file server for the scratch space.<br>
I should also add that this was a rather small setup of 8 nodes, and it was a<br>
few years back.<br>
The problem I have found in computational chemistry is that some jobs require<br>
either a large amount of memory, i.e. significantly more than the usual 2 GB per<br>
core, or a large amount of scratch space (if there is insufficient memory). You<br>
are in trouble if a job requires both. :-)<br>
<br>
All the best from a still hot London<br>
<br>
Jörg<br>
<br>
On Tuesday, 24 July 2018 at 17:02:43 BST, John Hearns via Beowulf wrote:<br>
> Paul, thanks for the reply.<br>
> I would like to ask, if I may. I rather like Gluster, but have not deployed<br>
> it in HPC. I have heard a few people comment about Gluster not working well<br>
> in HPC. Would you be willing to be more specific?<br>
> <br>
> One research site I talked to did the classic 'converged infrastructure'<br>
> idea of attaching storage drives to their compute nodes and distributing<br>
> Gluster storage. They were not happy with that, I was told, and I can very<br>
> much understand why. But I would be interested to hear about Gluster on<br>
> dedicated servers.<br>
> <br>
> On Tue, 24 Jul 2018 at 16:41, Paul Edmon <<a href="mailto:pedmon@cfa.harvard.edu" target="_blank">pedmon@cfa.harvard.edu</a>> wrote:<br>
> > While I agree with you in principle, one also has to deal with the reality<br>
> > you find yourself in. In our case we have more experience with Lustre than<br>
> > Ceph in HPC, and we got burned pretty badly by Gluster. While I like Ceph<br>
> > in principle, I haven't seen it do what Lustre can do in an HPC setting<br>
> > over IB. Now it may be able to do that, which is great. However, you then<br>
> > have to get your system set up to do that and prove that it can. After<br>
> > all, users have a funny way of breaking things that work amazingly well in<br>
> > controlled test environments, especially when you have no control over how<br>
> > they will actually use the system (as in a research environment).<br>
> > Certainly we are working on exploring this option too, as it would be<br>
> > awesome and save many headaches.<br>
> > <br>
> > Anyway, no worries about you being a smartarse; it is a valid point. One<br>
> > just needs to consider the realities on the ground in one's own<br>
> > environment.<br>
> > <br>
> > -Paul Edmon-<br>
> > <br>
> > On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:<br>
> > <br>
> > Forgive me for saying this, but the philosophy of software-defined storage<br>
> > such as Ceph and Gluster is that forklift-style upgrades should not be<br>
> > necessary.<br>
> > When a storage server is to be retired, the data is copied onto the new<br>
> > server and then the old one is taken out of service. Well, copied is not<br>
> > the correct word, as there are erasure-coded copies of the data.<br>
> > Rebalanced is probably a better word.<br>
> > <br>
> > Sorry if I seem to be a smartarse. I have gone through the pain of<br>
> > forklift-style upgrades in the past when storage arrays reached End of<br>
> > Life. I just really like the software-defined storage mantra: no component<br>
> > should be a single point of failure.<br>
> > <br>
> > <br>
<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>