No subject

Thu Jun 12 22:07:40 PDT 2014

5.7) When the number of nfsd instances is increased, the socket input
queue sizes should be increased as well. For instance, my NFS server
runs 48 instances, so the values of /proc/sys/net/ipv4/tcp_rmem and
tcp_wmem (or the old-fashioned /proc/sys/net/core/rmem_max and
rmem_default) should be changed accordingly.
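A sketch of what such an increase might look like with sysctl; the
numbers below are purely illustrative assumptions, not recommendations
(the right values depend on the workload):

```shell
# Illustrative only -- these values are assumptions, not tuning advice.
# Raise the global socket buffer ceilings (bytes):
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# tcp_rmem/tcp_wmem each take three values: min, default, max (bytes):
sysctl -w net.ipv4.tcp_rmem="4096 262144 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 262144 16777216"
```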
a) What is the relation with the wsize/rsize NFS mount options? For
instance, if wsize=rsize=8192K, does rmem_default/wmem_default have to
be set to (at least) 48*8192K? Does it have to be proportional to the
number of nfsd instances? What is the maximum value of the socket input
queue?
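The 48*8192K figure above is just back-of-the-envelope worst-case
arithmetic, assuming every nfsd instance could have one full
rsize-sized transfer buffered at once:

```shell
# Worst-case buffered data if every nfsd instance holds one full
# rsize-sized request at once (assumption, not a measured figure).
NFSD_INSTANCES=48
RSIZE_KB=8192                       # rsize=8192K from the example above
TOTAL_KB=$((NFSD_INSTANCES * RSIZE_KB))
echo "worst case: ${TOTAL_KB} KB (= $((TOTAL_KB / 1024)) MB) buffered"
```

In practice not all instances are busy simultaneously, so the real
requirement is likely well below this ceiling.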
b) If all the NFS clients are GigE, with the /proc/sys/net/ipv4/tcp_rmem
and tcp_wmem values set to 262144, how does that affect the values of
those parameters on the NFS server? Should they be set to much smaller
values than the ones on the server side?
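For reference, tcp_rmem and tcp_wmem each hold three fields (min,
default, max, in bytes), so a client-side 262144 would act as that
connection's 256 KB buffer ceiling. A small sketch of reading such a
line; the min and default fields here are assumed example values:

```shell
# Parse a hypothetical client-side tcp_rmem line (min default max, bytes).
TCP_RMEM="4096 87380 262144"   # 262144 = the 256 KB ceiling in the question
set -- $TCP_RMEM
echo "min=$1 default=$2 max=$3"
```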


More information about the Beowulf mailing list