<div dir="ltr">Thankyou for a comprehensive reply.</div><br><div class="gmail_quote"><div dir="ltr">On Tue, 24 Jul 2018 at 17:56, Paul Edmon <<a href="mailto:pedmon@cfa.harvard.edu">pedmon@cfa.harvard.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
This was several years back, so the current version of Gluster may be in better shape. We tried to use it for our primary storage but ran into scalability problems, especially when it came to healing bricks and doing replication. It simply didn't scale well. Eventually we abandoned it for NFS and Lustre: NFS for deep storage and Lustre for performance. We also tried it for hosting VM images, which worked pretty well, but we've since moved to Ceph for that.

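For anyone who wants to gauge that healing backlog on their own cluster, a minimal sketch is below. The "gluster volume heal <vol> info" command is standard, but the volume name (gv0) and the output parsing are illustrative assumptions, not from our setup.

#!/usr/bin/env python3
# Rough check of the self-heal backlog on a Gluster volume.
import subprocess

VOLUME = "gv0"  # placeholder volume name, not from our setup

def pending_heal_entries(volume):
    # "gluster volume heal <vol> info" prints, per brick, a line
    # like "Number of entries: N"; sum those counts.
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        if line.strip().startswith("Number of entries:"):
            total += int(line.split(":", 1)[1])
    return total

print(pending_heal_entries(VOLUME), "entries awaiting self-heal on", VOLUME)

A count that stays large while bricks rebuild is exactly the healing pain I mean.
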
Anyways, I have no idea about current Gluster in terms of scalability, so the issues we ran into may not be a problem anymore. However, it has made us very gun-shy about trying Gluster again. Instead we've decided to use Ceph, as we've gained a bunch of experience with Ceph in our OpenNebula installation.

-Paul Edmon-

On 07/24/2018 11:02 AM, John Hearns via Beowulf wrote:

Paul, thanks for the reply.

I would like to ask, if I may. I rather like Gluster, but have not deployed it in HPC. I have heard a few people comment about Gluster not working well in HPC. Would you be willing to be more specific?

One research site I talked to did the classic 'converged infrastructure' idea of attaching storage drives to their compute nodes and distributing Gluster storage across them. They were not happy with that, I was told, and I can very much understand why. But I would be interested to hear about Gluster on dedicated servers.

On Tue, 24 Jul 2018 at 16:41, Paul Edmon <pedmon@cfa.harvard.edu> wrote:

While I agree with you in principle, one also has to deal with the reality you find yourself in. In our case we have more experience with Lustre than Ceph in an HPC setting, and we got burned pretty badly by Gluster. While I like Ceph in principle, I haven't seen it do what Lustre can do in an HPC setting over IB. Now, it may be able to do that, which would be great. But then you have to get your system set up to do it and prove that it can. After all, users have a funny way of breaking things that work amazingly well in controlled test environments, especially when you have no control over how they will actually use the system (as in a research environment). Certainly we are working on exploring this option too, as it would be awesome and would save many headaches.

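If we do get around to proving it, the first smoke test would be something simple like the fio run sketched below, before any multi-node work. fio and the flags here are real; the mount point (/mnt/cephfs) and the job sizes are placeholder assumptions, and a real evaluation would also need parallel multi-node runs (e.g. IOR) against the actual application mix.

#!/usr/bin/env python3
# Minimal sequential-write smoke test for a candidate filesystem.
import subprocess

TARGET = "/mnt/cephfs"  # placeholder mount point under test

# Four concurrent 1 MiB-block sequential writers, O_DIRECT to bypass
# the page cache, with an aggregated report across the jobs.
subprocess.run(
    [
        "fio",
        "--name=seqwrite",
        "--directory=" + TARGET,
        "--rw=write",
        "--bs=1m",
        "--size=4g",
        "--numjobs=4",
        "--direct=1",
        "--group_reporting",
    ],
    check=True,
)
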
Anyways, no worries about you being a smartarse; it is a valid point. One just needs to consider the realities on the ground in one's own environment.

-Paul Edmon-

On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:

Forgive me for saying this, but the philosophy of software-defined storage such as Ceph and Gluster is that forklift-style upgrades should not be necessary. When a storage server is to be retired, the data is copied onto the new server and the old one is taken out of service. Well, copied is not the correct word, as there are erasure-coded copies of the data; rebalanced is probably a better word.

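To make that concrete, here is a minimal sketch of the rebalance-then-retire flow on Ceph. The "ceph osd out" and "ceph osd purge" commands are real (purge exists since Luminous); the OSD id and the crude health polling are illustrative assumptions, not a production runbook.

#!/usr/bin/env python3
# Sketch of retiring a storage device the software-defined way:
# mark it out, let the cluster rebalance, then remove it.
import subprocess
import time

OSD_ID = "12"  # placeholder id of the OSD being retired

def ceph(*args):
    return subprocess.run(
        ["ceph", *args], capture_output=True, text=True, check=True
    ).stdout

# Mark the OSD out: CRUSH stops placing data on it and the cluster
# starts rebalancing its placement groups onto the remaining OSDs.
ceph("osd", "out", OSD_ID)

# Wait for the rebalance to finish before touching the hardware.
while "HEALTH_OK" not in ceph("health"):
    time.sleep(60)

# Only then remove the OSD from the cluster map entirely.
ceph("osd", "purge", OSD_ID, "--yes-i-really-mean-it")

No forklift required: the data has already been rebalanced away before the old hardware is ever touched.
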
Sorry if I am seeming to be a smartarse. I have gone through the pain of forklift-style upgrades in the past when storage arrays reach end of life. I just really like the software-defined storage mantra: no component should be a single point of failure.

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf