[Beowulf] network filesystem

Stu Midgley sdm900 at gmail.com
Mon Mar 5 15:17:32 PST 2007


Actually I run it in production and I'm not a kernel hacker.  We
currently have 6 OSSs, each with software RAID5 across 6 internal SATA
disks.  We see about 190MB/s per OSS out of the disks and around
150MB/s via the dual network interfaces.
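
For reference, each OSS's array is just a standard Linux md RAID5.  A
minimal sketch, assuming six whole disks /dev/sdb through /dev/sdg
(the device names are only an example):

  # create a 6-disk software RAID5 array for the OST backing store
  mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]

  # watch the initial resync finish before putting it into production
  cat /proc/mdstat

Nothing Lustre-specific there; the OST just sits on top of the md
device.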

I can't think of any benchmark you care to mention where a single
Lustre OSS/MDS won't outperform NFS.  Especially if you configure
your systems to use both NICs (most motherboards now come with dual
interfaces), and I don't mean trunking the ports.  Just configure
portals so it knows it can speak to the OSSs via both NICs and it
will handle the rest for you.
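
On our boxes that amounts to a one-line module option.  A minimal
sketch, assuming a recent 1.4.x release where the networking module is
lnet and the interfaces are eth0/eth1 (both of those are assumptions,
so check the manual for your exact release):

  # /etc/modprobe.conf on the clients and OSS's
  options lnet networks="tcp0(eth0,eth1)"

The socket LND then spreads the Lustre traffic over both interfaces
itself, so there's no bonding or switch-side trunking to set up.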

Lustre's metadata performance is WAY better than NFS's.  I'd almost
say it's WAY better than any global FS I've played with.

Certainly, you have to use Lustre kernels... all we do is run CentOS
on our clients/servers and then we just grab the pre-built/supported
kernels from CFS.
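
To give you an idea, a server install is basically just the following
(the package names/versions below are made up for illustration; use
whatever CFS ships for your CentOS release):

  rpm -ivh kernel-smp-2.6.9-42.0.10.EL_lustre.1.4.x.x86_64.rpm
  rpm -ivh lustre-modules-1.4.x-<matching kernel>.x86_64.rpm
  rpm -ivh lustre-1.4.x-<matching kernel>.x86_64.rpm

and a reboot into the new kernel.  No patching or compiling on our
side.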

It's all pretty easy.  The current Lustre 1.4 is very, very nice and
we have found it to be very robust.  Nearly all the problems we
experience turn out to be flaky hardware or kernel issues, not Lustre
at all.

You can also check out the FUSE implementation of a Lustre client I
posted to CFS's website a few weeks back:

https://mail.clusterfs.com/wikis/lustre/fuse

While it needs a LOT of work to give decent performance, it does work.
Oh, and if someone ports liblustre to Mac OS X, I could also run it on
my Mac :)

Stu.

>
> How much do you use Lustre?  Yes, you can get that bandwidth,
> but if your code doesn't do large streaming I/O, your performance
> will be worse than NFS.  Also, I would like to hear someone
> speak up that uses Lustre in a PRODUCTION environment that
> doesn't have a kernel hacker on staff.
>
> Also, Lustre metadata doesn't scale (yet).  You can add
> another server, but that won't improve metadata performance.
>
> Using Lustre also requires you to re-patch your kernel every security
> update, then get the bugs out again.
>
> Lustre is the right answer for some, but not if you aren't going
> to have that many compute nodes, and it doesn't sound like that's
> the case here.
>
> Craig

-- 
Dr Stuart Midgley
sdm900 at gmail.com


