<div dir="ltr">We've had good luck with BeeGFS lately, running on vanilla SuperMicro hardware with ZFS as the underlying filesystem. It works well at the cheap end of the hardware spectrum, and BeeGFS itself is free and quite impressive. It has held up to abuse under a very mixed and heavy workload, and we can stream large sequential data into it fast enough to saturate a QDR InfiniBand link, all without any in-depth tuning. While we don't have redundancy (other than raidz3), BeeGFS can be set up with some redundancy between metadata servers and mirroring between storage servers. <a href="http://www.beegfs.com/content/">http://www.beegfs.com/content/</a><div><div><div><br></div><div>jbh</div></div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Feb 13, 2017 at 7:40 PM Alex Chekholko <<a href="mailto:alex.chekholko@gmail.com">alex.chekholko@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">If you have a preference for Free Software, GlusterFS would work, unless you have many millions of small files. It would also depend on your available hardware, as there is not a 1-to-1 correspondence between a typical GPFS setup and a typical GlusterFS setup. But at least it is free and easy to try out. The mailing list is active, the software is now mature (I last used GlusterFS a few years ago), and you can buy support from Red Hat if you like.<br class="gmail_msg"><br class="gmail_msg">Take a look at the Red Hat whitepapers about typical GlusterFS architecture.<br class="gmail_msg"><br class="gmail_msg">CephFS, on the other hand, is not yet mature enough, IMHO.<br class="gmail_msg"><div class="gmail_quote gmail_msg"><div dir="ltr" class="gmail_msg">On Mon, Feb 13, 2017 at 8:31 AM Justin Y. 
Shi <<a href="mailto:shi@temple.edu" class="gmail_msg" target="_blank">shi@temple.edu</a>> wrote:<br class="gmail_msg"></div><blockquote class="gmail_quote gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="gmail_msg">You might consider Scality (<a href="http://www.scality.com/" class="gmail_msg" target="_blank">http://www.scality.com/</a>) for your growth concerns. If you need speed, DDN is faster at rapid data ingestion and better suited to extreme HPC data needs.</div><div dir="ltr" class="gmail_msg"><div class="gmail_msg"><br class="gmail_msg"></div><div class="gmail_msg">Justin </div></div><div class="gmail_extra gmail_msg"><br class="gmail_msg"><div class="gmail_quote gmail_msg">On Mon, Feb 13, 2017 at 4:32 AM, Tony Brian Albers <span dir="ltr" class="gmail_msg"><<a href="mailto:tba@kb.dk" class="gmail_msg" target="_blank">tba@kb.dk</a>></span> wrote:<br class="gmail_msg"><blockquote class="gmail_quote gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="gmail_msg">On 2017-02-13 09:36, Benson Muite wrote:<br class="gmail_msg">
> Hi,<br class="gmail_msg">
><br class="gmail_msg">
> Do you have any performance requirements?<br class="gmail_msg">
><br class="gmail_msg">
> Benson<br class="gmail_msg">
><br class="gmail_msg">
> On 02/13/2017 09:55 AM, Tony Brian Albers wrote:<br class="gmail_msg">
>> Hi guys,<br class="gmail_msg">
>><br class="gmail_msg">
>> So, we're running a small (as in a small number of nodes (10), not<br class="gmail_msg">
>> storage (170 TB)) Hadoop cluster here. Right now we're on IBM Spectrum<br class="gmail_msg">
>> Scale (GPFS), which works fine and has POSIX support. On top of GPFS we<br class="gmail_msg">
>> have a GPFS transparency connector so that HDFS uses GPFS.<br class="gmail_msg">
>><br class="gmail_msg">
>> Now, if I'd like to replace GPFS with something else, what should I use?<br class="gmail_msg">
>> It needs to be a fault-tolerant DFS, with POSIX support (so that users<br class="gmail_msg">
>> can move data to and from it with standard tools).<br class="gmail_msg">
>><br class="gmail_msg">
>> I've looked at MooseFS, which seems to be able to do the trick, but are<br class="gmail_msg">
>> there any others that might do?<br class="gmail_msg">
>><br class="gmail_msg">
>> TIA<br class="gmail_msg">
>><br class="gmail_msg">
><br class="gmail_msg">
<br class="gmail_msg">
</span>Well, we're not going to be doing a huge amount of I/O, so performance<br class="gmail_msg">
requirements are not high. But ingest needs to be really fast; we're<br class="gmail_msg">
talking tens of terabytes here.<br class="gmail_msg">
<span class="m_-1164830425095355951m_-6847030932552244759HOEnZb gmail_msg"><font color="#888888" class="gmail_msg"><br class="gmail_msg">
/tony<br class="gmail_msg">
</font></span><span class="m_-1164830425095355951m_-6847030932552244759im m_-1164830425095355951m_-6847030932552244759HOEnZb gmail_msg"><br class="gmail_msg">
--<br class="gmail_msg">
Best regards,<br class="gmail_msg">
<br class="gmail_msg">
Tony Albers<br class="gmail_msg">
Systems administrator, IT-development<br class="gmail_msg">
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.<br class="gmail_msg">
Tel: <a href="tel:%2B45%202566%202383" value="+4525662383" class="gmail_msg" target="_blank">+45 2566 2383</a> / <a href="tel:%2B45%208946%202316" value="+4589462316" class="gmail_msg" target="_blank">+45 8946 2316</a><br class="gmail_msg">
</span><div class="m_-1164830425095355951m_-6847030932552244759HOEnZb gmail_msg"><div class="m_-1164830425095355951m_-6847030932552244759h5 gmail_msg">_______________________________________________<br class="gmail_msg">
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" class="gmail_msg" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br class="gmail_msg">
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" class="gmail_msg" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br class="gmail_msg">
</div></div></blockquote></div><br class="gmail_msg"></div>
</blockquote></div>
</blockquote></div></div><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><div dir="ltr"><div>‘[A] talent for following the ways of yesterday, is not sufficient to improve the world of today.’</div><div> - King Wu-Ling, ruler of the Zhao state in northern China, 307 BC</div></div></div>