[Beowulf] Re: how large of an installation have people used NFS with? would 300 mounts kill performance?

Greg Keller Greg at keller.net
Wed Sep 9 13:38:45 PDT 2009

> Date: Wed, 9 Sep 2009 12:40:23 -0500
> From: Rahul Nabar <rpnabar at gmail.com>

> Our new cluster aims to have around 300 compute nodes. I was wondering
> what is the largest setup people have tested NFS with? Any tips or
> comments? There seems no way for me to say if it will scale well or
> not.
"It all depends" -- Anonymous Cluster expert

I routinely run NFS with 300+ nodes, but "it all depends" on the
applications' IO profiles.  For example, lots of nodes reading and
writing different files in a generically staggered fashion may not be
a big deal.  300 nodes writing to the same file at the same time...
ouch!  If you buy generic enough hardware you can hedge your bet and
convert to Gluster or Lustre, or eventually pNFS, if things get ugly.
But not all NFS servers are created equal, and a solid purpose-built
appliance may handle loads a general-purpose Linux NFS server won't.
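The "staggered, one file per node" pattern above can be sketched roughly as follows. This is a hypothetical illustration (the helper name, output directory, and window length are made up), not anyone's actual job script:

```python
import os

def staggered_io_plan(rank, n_nodes, window_s=60.0, outdir="/scratch/job"):
    """Hypothetical sketch: give each node its own output file and its
    own start offset inside a time window, so hundreds of nodes neither
    hit the NFS server at the same instant nor fight over one file."""
    delay = (rank * window_s) / n_nodes          # spread starts across the window
    path = os.path.join(outdir, f"out.{rank:04d}.dat")  # one file per node
    return delay, path

# e.g. node 150 of 300 with a 60 s window starts ~30 s in,
# writing out.0150.dat instead of a single shared file.
```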

> Assume each of my compute nodes have gigabit ethernet AND I specify
> the switch such that it can handle full line capacity on all ports.
> Will there still be performance hits as I start adding compute nodes?
> Why? Or is it unrealistic to configure a switching setup with full
> line capacities on 300 ports?
The bottleneck is more likely the file server's NIC and/or its back-
end storage performance.  If the file server is 1GbE-attached, then
having a strong network won't help NFS all that much.  A 10GbE-attached
server will keep up with a fair number of RAIDed disks on the back
end.  Load the NFS server up with a lot of RAM and you can keep a lot
of nodes happy if they are reading a common set of files in parallel.
Until you get to parallel-FS options, it's hard to imagine the
switching infrastructure being the bottleneck so long as it supports
the 1 or 10GbE performance from the IO node.

If you expect heavy MPI usage on the Ethernet side, then non-blocking
fabric and low latency become relevant, but for IO the network only
needs to accommodate the slowest link... the IO node.

Hope this helps,
