NFS Performance (was Re: [Beowulf] GPFS on Linux (x86))
Chris Samuel
csamuel at vpac.org
Sat Sep 16 02:40:35 PDT 2006
On Saturday 16 September 2006 12:32 am, Brent Franks wrote:
> Nice, any sort of comparison data in terms of differences in
> throughput achieved?
We weren't as concerned about throughput as the fact that when the NFS server
was under mild load (which the previous RH7.3 box could cope with) it started
to become unresponsive, and we were seeing anything from odd application
behaviour (SVN checkouts failing was one of the most bizarre) through to
stale NFS file handles and lots of "NFS server not responding" messages.
I did try to replace the RHEL3 kernel with the latest kernel.org 2.4 kernel,
but found that it wouldn't work because Red Hat had backported NPTL and
various other bits from 2.6, which meant that a stock 2.4 kernel broke many of
their userspace apps. :-(
But the main headache is that we believe ext3 is now single-threaded
through kjournald, so as that starts backing up under load you see all
your NFS daemons getting stuck in device waits. I was trying everything I
could think of to improve performance, and at one point had increased the
number of NFS daemons to over 100 to stop the timeouts from occurring. All
that really achieved was pushing the load average on the box above 80.
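For anyone wanting to poke at the same thing: on a RHEL-style box the nfsd
thread count is normally set via RPCNFSDCOUNT in /etc/sysconfig/nfs, or bumped
at runtime with rpc.nfsd, and the stuck daemons show up as nfsd processes
sitting in D state. Roughly along these lines (a sketch, paths may differ on
other distros):

# grep RPCNFSDCOUNT /etc/sysconfig/nfs   # set to e.g. 128, picked up on the next nfs restart
# rpc.nfsd 128                           # or change the thread count on the fly
# ps axo stat,pid,comm | grep '^D'       # nfsd threads blocked in device wait show up here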
So we ditched RHEL on that box, moved the stuff that was tied to RHEL onto a
node where it couldn't do any more damage and went to Fedora instead.
This was with RHEL3, but I've been looking at filesystem performance under
RHEL4 for a group who have a cluster running it (I did warn them) and it
doesn't appear to have improved much. :-(
> Additionally, are you writing your journal to a different partition?
Nope, but our XFS partitions are created with:
# mkfs.xfs -f -l su=65536 -d agcount=32 /dev/[...]
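That gives a 64 KiB log stripe unit and 32 allocation groups, with the log
still internal. If you did want the journal on its own partition, mkfs.xfs can
put the log on an external device with -l logdev= -- something like the
following (just a sketch, not something we've benchmarked here):

# mkfs.xfs -f -l logdev=/dev/[log-partition],size=64m -d agcount=32 /dev/[...]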
cheers,
Chris
--
Christopher Samuel - (03)9925 4751 - VPAC Deputy Systems Manager
Victorian Partnership for Advanced Computing http://www.vpac.org/
Bldg 91, 110 Victoria Street, Carlton South, VIC 3053, Australia