[Beowulf] What are people seeing performance-wise for NFS over 10GbE
jan.heichler at gmx.net
Thu Feb 28 13:05:13 PST 2008
On Thursday, 28 February 2008, you wrote:
>> The best I saw for NFS over 10 GbE was about 350-400 MB/s write and about 450 MB/s read.
>> Single server to 8 simultaneous accessing clients (aggregated performance).
The clients had a 1 gig uplink...
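For scale, the aggregate wire ceiling of that client setup can be sketched with a quick back-of-envelope calculation (the 8-client and 1 GbE figures are from the numbers above; everything else is plain unit conversion):

```shell
# Back-of-envelope: aggregate wire ceiling of 8 clients with 1 GbE uplinks.
clients=8
uplink_mbit=1000                          # 1 GbE per client
wire_mb=$(( clients * uplink_mbit / 8 ))  # bits -> bytes: total MB/s on the wire
echo "aggregate client ceiling: ${wire_mb} MB/s"
```

So the ~450 MB/s aggregate read sits at well under half of the ~1000 MB/s the client links could carry, which points at the server side rather than the client network.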
JL> Hi Jan:
JL> Ok. Thanks. This is quite helpful.
>> On the block device I got 550 MB/s write and 1.1 GB/s read performance.
JL> Using iSCSI? To real disks or ramdisk/nullio? Most of the benchmarks I
JL> have seen online have been to nullio or ramdisks. We are going to real disks.
Real disks: 16 SAS 15k disks on an LSI 8888 controller in RAID-5, connected through an x4 SAS link on a backplane. Because of the x4 SAS connection the read rate is limited to about 1.1 GB/s; with discrete connections to the drives the read speed should be roughly 50% higher.
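Those two limits can be sanity-checked. Assuming 3 Gb/s SAS-1 lanes with 8b/10b encoding (both assumptions, not stated above), and an assumed ~110 MB/s sequential rate per 15k disk:

```shell
# Sketch: payload ceiling of an x4 SAS link (assumes 3 Gb/s lanes, 8b/10b).
lanes=4
lane_mbit=3000                                    # 3 Gb/s per lane (assumption)
usable_mb=$(( lanes * lane_mbit * 8 / 10 / 8 ))   # strip 8b/10b overhead, bits -> bytes
echo "x4 SAS payload ceiling: ${usable_mb} MB/s"

disks=16
per_disk_mb=110                                   # assumed sequential MB/s per disk
echo "raw disk aggregate: $(( disks * per_disk_mb )) MB/s"
```

The ~1200 MB/s link ceiling matches the observed 1.1 GB/s, and the ~1760 MB/s raw disk aggregate is about 50% above it, consistent with the estimate for discrete connections.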
I couldn't get a tmpfs exported over NFS, but I did not try very hard, because it makes no sense for practical usage; it was only to find out whether NFS itself is the bottleneck.
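For anyone wanting to retry that experiment: tmpfs has no stable device number, so the export usually needs an explicit fsid= in /etc/exports or knfsd will refuse it. A minimal sketch (paths, size, and export options are placeholders, not from the original setup):

```shell
# On the server: mount a tmpfs and export it with an explicit fsid.
mount -t tmpfs -o size=8g tmpfs /export/ramtest
echo '/export/ramtest *(rw,no_root_squash,fsid=1,no_subtree_check)' >> /etc/exports
exportfs -ra

# On the client: mount it like any NFS export.
mount -t nfs -o rsize=32768,wsize=32768 server:/export/ramtest /mnt/ramtest
```

With the disks out of the picture, any remaining gap to wire speed is attributable to NFS and the network stack.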
I tried several configs, including software RAID-0. The performance was a disaster compared to the theoretical values.
>> JL> 22.214.171.124 kernel on both sides, jumbo frames enabled. No switch, just
>> JL> a CX4 cable.
>> rsize/wsize are set to?
JL> I tried a range: 8k through 64k
Okay. That was the most important improvement I could make. The first kernel I used did not allow going over 8k; with 32k it was much faster.
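One caveat worth checking: the server can silently cap rsize/wsize below what the client asked for, so the mount option alone is not authoritative. The negotiated values can be read back on the client:

```shell
# Show the rsize/wsize the kernel actually negotiated for NFS mounts:
grep nfs /proc/mounts

# or, with nfs-utils installed:
nfsstat -m
```

If these still show 8192 despite a 32k mount option, the server-side limit (or an old kernel) is the place to look.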
>> NFS3 or NFS4?
I have to say I saw no performance improvement with NFS4. Everybody I talked to pointed at the bad NFS performance of Linux (and many said: use Solaris, it is much faster ;-) ).