[Beowulf] NFS vs parallel filesystems

John Hearns hearnsj at gmail.com
Sun Sep 19 08:54:35 UTC 2021


Lohit, good morning.  I work for Dell in the EMEA HPC team.  You make some
interesting observations.
Please ping me offline regarding Isilon.
Regarding NFS, we have a brand-new Ready Architecture which uses PowerEdge
servers and ME-series storage (*).
It gets pretty decent performance, and I would honestly say that these
days NFS is a perfectly good fit for small clusters -
the kind used by departments or small- to medium-sized engineering
companies.
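
For a quick, single-client feel for the metadata side of this, a crude
create/stat/unlink loop against the mount is a telling first test. Here
is a minimal Python sketch - the mount point and file count are
placeholders you would adjust, and for a proper test you would use
mdtest (from the IOR suite) with MPI ranks on several clients:

# Rough single-client metadata microbenchmark: create, stat and unlink
# many small files and report operations per second.
import os
import time

MOUNT = "/mnt/nfs_test"   # placeholder: mount point under test
N_FILES = 10_000          # increase until the rates stop changing

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    dt = time.perf_counter() - t0
    print(f"{label}: {N_FILES / dt:,.0f} ops/s")

testdir = os.path.join(MOUNT, "mdbench")
os.makedirs(testdir, exist_ok=True)
paths = [os.path.join(testdir, f"f{i}") for i in range(N_FILES)]

timed("create", lambda: [open(p, "w").close() for p in paths])
timed("stat",   lambda: [os.stat(p) for p in paths])
timed("unlink", lambda: [os.unlink(p) for p in paths])
os.rmdir(testdir)

On a departmental cluster the rates you get from a loop like this over
NFS are often perfectly adequate; the ceiling only shows up when many
clients hit the same server at once.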

If you want to try out your particular workloads we have labs available.

You then go on to talk about petabytes of data - that is the field where
you have to look at scale-out filesystems.

(*) I cannot find this on public webpages yet, sorry
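
As a companion to the metadata loop above, the clearest place a single
NFS head runs out of steam - and a scale-out filesystem keeps going - is
aggregate bandwidth from many concurrent writers. Another minimal
sketch, again with the mount point and sizes as placeholders; sweep the
worker count and watch where the total plateaus:

# Many-writers streaming test: N worker processes each write their own
# file on the shared mount, and we report aggregate bandwidth.
import os
import time
from multiprocessing import Pool

MOUNT = "/mnt/shared_test"   # placeholder: shared mount under test
N_WORKERS = 8                # sweep: 1, 2, 4, 8, 16, ...
MB_PER_WORKER = 256
CHUNK = b"\0" * (1 << 20)    # 1 MiB per write

def writer(rank):
    path = os.path.join(MOUNT, f"stream_{rank}.dat")
    with open(path, "wb") as f:
        for _ in range(MB_PER_WORKER):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())   # force the data to the server
    os.unlink(path)

if __name__ == "__main__":
    t0 = time.perf_counter()
    with Pool(N_WORKERS) as pool:
        pool.map(writer, range(N_WORKERS))
    dt = time.perf_counter() - t0
    print(f"{N_WORKERS} writers: {N_WORKERS * MB_PER_WORKER / dt:,.0f} MB/s aggregate")

On a single NFS server the aggregate typically flattens at that server's
network or disk limit, however beefy it is, whereas a parallel filesystem
striping across many storage servers keeps scaling with client count -
which is exactly the distinction being asked about below.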

On Sat, 18 Sept 2021 at 18:21, Lohit Valleru via Beowulf <
beowulf at beowulf.org> wrote:

> Hello Everyone,
>
> I am trying to find answers to an age-old question of NFS vs parallel
> filesystems. Specifically - Isilon OneFS vs parallel filesystems.
> Specifically, I am looking for any technical articles or papers that can
> help me understand what exactly will not work on OneFS.
> I understand that at the end - it all depends on workloads.
> But at what volume of metadata I/O, or with which I/O patterns, does NFS
> become a bad fit? Would just getting a beefy, HDD-based Isilon NFS system
> resolve most of the issues?
> I am trying to find sources that can say that no matter how beefy an NFS
> server gets with HDDs as the backend - it will not be as good as parallel
> filesystems for a given workload.
> If possible - can anyone point me to experiences or technical papers that
> describe which workloads do not work well with NFS?
>
> Does it have to be that, in the end, I will have to test my workloads
> across both NFS/OneFS and parallel filesystems and then see what does not
> work?
>
> I am concerned that any test case might not be valid compared to real
> shared workloads, where performance might lag once the storage reaches
> petabytes in scale and millions of files.
>
> Thank you,
> Lohit
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>