[Beowulf] how large of an installation have people used NFS with? would 300 mounts kill performance?

Rahul Nabar rpnabar at gmail.com
Thu Sep 10 08:36:25 PDT 2009

On Wed, Sep 9, 2009 at 2:32 PM, Greg Kurtzer <gmkurtzer at gmail.com> wrote:

> NFS itself doesn't have any hard limits and I have seen clusters well
> over a thousand nodes using it.

Thanks Greg! That is very reassuring to know! :)
I myself had an installation with 256 NFS mounts, but those were ancient
clusters that were essentially "groups of single-CPU PCs".

The "well over a thousand node" NFS clusters that Greg refers to: are any
masters of such installations around on this list? If so, I'd give an
arm and a leg and more to be in touch and pick up your tips and comments.
Whenever I mention "300 nodes", "Gigabit Ethernet", and NFS in the
same breath, people look at me as if I were a madman. :)
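For concreteness, the kind of setup under discussion is nothing exotic. A
hypothetical sketch (server name, export path, and the specific option values
are all made up for illustration) of what each of the ~300 clients would carry
in /etc/fstab, plus the one server-side knob that usually matters first at this
scale, the nfsd thread count:

```shell
# Client side: one /etc/fstab line per node (hypothetical server and export).
# rsize/wsize and hard-vs-soft mounting are workload-dependent choices.
nfsserver:/export/home  /home  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0

# Server side: with hundreds of clients the default nfsd thread count
# (often 8) is typically far too low. On Linux it can be raised, e.g.:
#   echo 128 > /proc/fs/nfsd/threads
# or persistently via RPCNFSDCOUNT=128 in /etc/sysconfig/nfs (on Red Hat
# style distributions) followed by an nfsd restart.
```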

> As an aside note, generally the more specialized or non-standard the
> implementation, the more pressure you will put on administration
> costs.

Exactly. Hence I want NFS: it keeps things simple and, ergo, cheap.

> Keep in mind that the requirements of the system and budget need to
> define the architecture of the system. NFS is a good choice and can be
> suitable for systems much larger than 300 nodes. *BUT* that would
> depend on what you are doing with the cluster, application IO
> requirements, usage patterns, user needs, reliability/uptime goals,
> etc...

I see too many invocations of the "it depends" rule of HPC everywhere I go! :)
