Need comments about cluster file systems

Donald Becker becker at
Thu Nov 14 13:06:57 PST 2002

On Thu, 14 Nov 2002 hanzl at wrote:

> > In other words, local disks seems to be sufficient in the beowulf world,
> What surprises me is that I haven't heard of any cluster using local
> disks as persistent cache for files loaded from some central
> repository

That's a very common scenario, and many deployed systems do exactly that.

> using something like Coda or InterMezzo (for systems where
> local disks are quicker than network card).

Ahhh, you are looking for a Named Subsystem that claims to be a Cluster
File System.  The key to good performance is not exceeding the semantic
requirements of the application by too much, and the best systems are so
transparent to the end user that they don't Need A Capitalized Name.

Our cluster system, for example, uses a specialized whole-file-caching
filesystem internally.  End users don't see it as a file system, but
that's what it is.  We take advantage of the semantics of executable and
libraries to get better performance and caching behavior than using a
general purpose file system.  It's "pluggable" so we can transparently
use customized file transport modes (TCP, multicast, direct Myri or SCI,
even FTP!), or directly use another network file system.  The special
semantics are that executables and libraries are only replaced with new
versions, never updated in place or extended.  A new version is a new
file, and running applications continue to use the old version.
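Those replace-only semantics fall out of ordinary POSIX behavior. A minimal sketch (my illustration, not Scyld's actual code): install a new version as a separate file and rename() it into place, and any process that already opened the old file keeps reading the old inode.

```python
import os
import tempfile

d = tempfile.mkdtemp()
lib = os.path.join(d, "libfoo.so")

# Install "version 1" of the library.
with open(lib, "w") as f:
    f.write("version 1")

# A "running application" opens the library before the upgrade.
reader = open(lib)

# Install "version 2" as a NEW file, then atomically rename into place.
# The old directory entry is replaced, but the old inode survives as
# long as someone holds it open.
new = lib + ".new"
with open(new, "w") as f:
    f.write("version 2")
os.rename(new, lib)

old = reader.read()          # the running app still sees the old version
current = open(lib).read()   # new opens see the new version
reader.close()
```

Because the old file is never updated in place, a cache holding whole files needs no invalidation protocol: a cached copy is either the current version or a complete, still-valid old one.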

> (For sure others will point you to PVFS, which IMHO makes sense only
> if network card is quicker than local disk.)

It's frustrating to hear people talk about how wonderful InterMezzo and
Lustre _are_, and dismiss PVFS and GFS.  Software that is not quite
finished is always better and faster than software that already
exists.  It only loses speed and features when reality looms.

Keep vaporware and deployed systems in separate categories unless you
are clearly speaking in the future tense.

Donald Becker				becker at
Scyld Computing Corporation
410 Severn Ave. Suite 210		Scyld Beowulf cluster system
Annapolis MD 21403			410-990-9993
