[Beowulf] Re: how large of an installation have people used NFS with? would 300 mounts kill performance?

Joe Landman landman at scalableinformatics.com
Thu Sep 10 09:18:00 PDT 2009


Rahul Nabar wrote:

>> Lustre or eventually pNFS if things get ugly.  But not all NFS servers are
>> created equal, and a solid purpose built appliance may handle loads a
>> general purpose linux NFS server won't.
> 
> Disk array connected to a generic Linux server? Or a standalone
> fileserver? Recommendations?

At least one company on this list sells some nice fast storage boxen.  I 
am biased of course, as I work there ...

> What exactly does a "solid purpose built appliance" offer that a
> Generic Linux server (well configured) connected to an array of disks
> does not offer?

"It depends".  Your off the shelf Linux servers aren't very well 
designed for high performance file service.  You would either need to go 
to a special purpose built server, or the pure purpose-built appliance 
boxen.  The latter often have some additional features you may or may 
not find useful, at a price you may or may not be willing to pay for. 
The former, depending upon whom you speak with, will provide excellent 
performance for reasonable prices on your use case.

>> The bottleneck is more likely the file server's NIC and/or its back-end
>> storage performance.  If the file server is 1GbE attached then having a
>> strong network won't help NFS all that much.  10GbE attached will keep up
>> with a fair number of RAIDed disks on the back-end.  Load the NFS server up
>> with a lot of RAM and you could keep a lot of nodes happy if they are
>> reading a common set of files in parallel.
> 
> Yup; I'm going for at least 24 GB RAM and twin 10 GigE cards
> connecting the file server to the switch.
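
Before anything else, it is worth confirming what the raw link between a 
client and the file server can actually carry; NFS will never beat that 
ceiling.  Something along these lines with iperf works (the hostname 
below is just a placeholder):

   server$ iperf -s
   client$ iperf -c fileserver-10g

If that comes in well short of 10GbE line rate, fix the network before 
worrying about NFS tuning.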

FWIW: I didn't post it to this list at the time, but we had a single 
client and a single server sustain 1 GB/s (954 MB/s really, I rounded 
up) over a single single-mode fibre running NFS:


"Who says you can’t do Gigabyte per second NFS?

I keep hearing this. Its not true though. See below.

NFS client: Scalable Informatics Delta-V (ΔV) 4 unit
NFS server: Scalable Informatics JackRabbit 4 unit.
(you can buy these units today from Scalable Informatics and its partners)
10GbE: single XFP fibre between two 10GbE NICs.

This is NOT a clustered NFS result.

root at dv4:~# mount | grep data2
10.1.3.1:/data on /data2 type nfs 
(rw,intr,rsize=262144,wsize=262144,tcp,addr=10.1.3.1)

root at dv4:~# mpirun -np 4 ./io-bm.exe -n 32 -f /data2/test/file -r -d  -v
N=32 gigabytes will be written in total
each thread will output 8.000 gigabytes
page size                     ... 4096 bytes
number of elements per buffer ... 2097152
number of buffers per file    ... 512
Thread=3: time = 33.665s IO bandwidth = 243.337 MB/s
Thread=2: time = 33.910s IO bandwidth = 241.580 MB/s
Thread=1: time = 34.262s IO bandwidth = 239.101 MB/s
Thread=0: time = 34.244s IO bandwidth = 239.226 MB/s
Naive linear bandwidth summation = 963.244 MB/s
More precise calculation of Bandwidth = 956.404 MB/s
"

The machine running the code has 8 GB of RAM, so writing 32 GB puts it 
well outside what it can cache.  The remote system (the 10.1.3.1 unit) 
has native local disk performance of about 1.6 GB/s read and 2.0 GB/s 
write.

So yes, with the right system, you can get a nice bit of performance out 
of it.
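
For anyone who wants to run a similar measurement on their own hardware, 
here is a minimal MPI sketch of the same kind of test.  To be clear, this 
is NOT the io-bm source: the file layout, buffer size, program name, and 
the use of plain buffered writes (no direct I/O) are simplifications of 
mine, so treat it as a starting point rather than a faithful reproduction.

/*
 * Minimal parallel streaming-write benchmark sketch (not io-bm).
 * Each MPI rank writes its own file in large sequential blocks, times
 * the writes, and rank 0 reports both the naive sum of per-rank rates
 * and total-bytes / slowest-rank time.
 *
 * Build/run (assuming a working MPI environment):
 *   mpicc -O2 io_sketch.c -o io_sketch
 *   mpirun -np 4 ./io_sketch /data2/test/file 8
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_MB 8                        /* size of each write, in MB */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const char *base = (argc > 1) ? argv[1] : "/tmp/iotest";
    long gb_per_rank = (argc > 2) ? atol(argv[2]) : 1;  /* GB per rank    */
    long nbufs = gb_per_rank * 1024 / BUF_MB;           /* writes per rank */

    char path[4096];
    snprintf(path, sizeof(path), "%s.%d", base, rank);

    size_t bufsize = (size_t)BUF_MB << 20;
    char *buf = malloc(bufsize);
    if (!buf) { perror("malloc"); MPI_Abort(MPI_COMM_WORLD, 1); }
    memset(buf, 0xAA, bufsize);

    FILE *fp = fopen(path, "w");
    if (!fp) { perror("fopen"); MPI_Abort(MPI_COMM_WORLD, 1); }

    MPI_Barrier(MPI_COMM_WORLD);        /* start all ranks together */
    double t0 = MPI_Wtime();
    for (long i = 0; i < nbufs; i++)
        fwrite(buf, 1, bufsize, fp);
    /* NOTE: no fsync()/O_DIRECT here, so the client page cache can
       flatter the numbers unless the data written greatly exceeds the
       client's RAM (as in the 32 GB vs. 8 GB run above). */
    fflush(fp);
    fclose(fp);
    double dt = MPI_Wtime() - t0;

    double mb = (double)gb_per_rank * 1024.0;
    double rate = mb / dt, sum_rate, max_dt;
    MPI_Reduce(&rate, &sum_rate, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(&dt, &max_dt, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    printf("Thread=%d: time = %.3fs IO bandwidth = %.3f MB/s\n",
           rank, dt, rate);
    if (rank == 0) {
        printf("Naive linear bandwidth summation = %.3f MB/s\n", sum_rate);
        printf("Total data / slowest rank time   = %.3f MB/s\n",
               mb * nprocs / max_dt);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}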

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


