<div dir="auto">I would look at BeeGFS here </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, 10 Aug 2023, 20:19 leo camilo, <<a href="mailto:lhcamilo@gmail.com">lhcamilo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi everyone, <br><br></div>I was hoping I would seek some sage advice from you guys. <br><br></div>At my department we have build this small prototyping cluster with 5 compute nodes,1 name node and 1 file server. <br><br></div>Up until now, the name node contained the scratch partition, which consisted of 2x4TB HDD, which form an 8 TB striped zfs pool. The pool is shared to all the nodes using nfs. The compute nodes and the name node and compute nodes are connected with both cat6 ethernet net cable and infiniband. Each compute node has 40 cores.<br><br></div>Recently I have attempted to launch computation from each node (40 tasks per node), so 1 computation per node. And the performance was abysmal. I reckon I might have reached the limits of NFS.<br><br></div>I then realised that this was due to very poor performance from NFS. I am not using stateless nodes, so each node has about 200 GB of SSD storage and running directly from there was a lot faster. <br><br></div>So, to solve the issue, I reckon I should replace NFS with something better. I have ordered 2x4TB NVMEs for the new scratch and I was thinking of :<br><br></div><ul><li>using the 2x4TB NVME in a striped ZFS pool and use a single node GlusterFS to replace NFS</li><li>using the 2x4TB NVME with GlusterFS in a distributed arrangement (still single node)</li></ul><div>Some people told me to use lustre,but I reckon that might be overkill. And I would only use a single fileserver machine(1 node).<br><br></div><div>Could you guys give me some sage advice here?<br><br></div><div>Thanks in advance<br></div><div><div><div><div><div><br><br></div></div></div></div></div></div>
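For illustration only, here is a rough sketch of the first option leo describes (striped ZFS pool on the two new NVMe drives, exported as a single-node GlusterFS volume). The pool name, device paths, hostname "fileserver" and mount points are made-up placeholders, not from the thread, and the commands should be checked against the man pages for your versions:

    # On the file server: striped pool across the two new NVMe drives
    # (no mirror/raidz keyword means a plain stripe, like the current HDD pool)
    zpool create scratch /dev/nvme0n1 /dev/nvme1n1

    # Use a subdirectory of the pool as the brick, then create and start a
    # single-brick (single-node) GlusterFS volume on it
    mkdir /scratch/brick
    gluster volume create scratch-vol fileserver:/scratch/brick
    gluster volume start scratch-vol

    # On each compute node: mount via the native FUSE client instead of NFS
    mkdir -p /mnt/scratch
    mount -t glusterfs fileserver:/scratch-vol /mnt/scratch

A single-server BeeGFS setup has a similar shape (management, metadata and storage services on the file server, the beegfs client on each compute node), and the BeeGFS client can use the InfiniBand fabric via RDMA, which is worth having with 5 nodes x 40 cores hitting scratch at once.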