Replacing NFS.
Rob Latham
rlatham at plogic.com
Mon Apr 9 17:44:57 PDT 2001
On Mon, Apr 09, 2001 at 05:20:58PM -0400, Georgia Southern Beowulf Cluster Project wrote:
> Hello,
>
> I'm trying to replace NFS on my cluster and I'm running into a dead end.
> I'm using 15 diskless nodes with a single master node.
of course, it depends on your situation, but aren't you pretty much
running a hardware environment begging for Scyld?
> Also, does anyone have experience with Coda and is it pretty much a
> drop-in replacement for NFS?
please speak up otherwise, coda fans, but it seems to be in a
perpetual state of development. i usually check the option when
building kernels at home. not that i've actually used it, though :>
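(fwiw, the option i mean is CONFIG_CODA_FS, under "Network File
Systems" in the kernel config, i believe -- and you'd still need the
userspace venus cache manager from coda.cs.cmu.edu on top of that to
actually do anything with it.)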
> I've also heard of AFS, but it seems to be a
> bit limited.
limited how? if anything it's overkill, especially since you won't
have a local disk for cache ( non-broken caching is one of the best
reasons to use AFS in a cluster of workstations )
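to give a rough idea of what i mean about the cache: the afs client
(afsd) normally wants a dedicated chunk of local disk for its cache,
configured in /usr/vice/etc/cacheinfo as mountpoint:cachedir:size,
something like (numbers made up):

    /afs:/usr/vice/cache:100000

that last field is the cache size in 1K blocks. afsd does have a
-memcache option to keep the cache in RAM instead of on disk, but on
a diskless compute node that's memory you'd probably rather hand to
jobs.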
> Also, Global File System (GFS), but it requires special
> hardware.
not strictly true. you can use the network block device to do a
poorly-performing GFS setup. You'll need kernel patches and utilities
from sistina.com ( or you can try out the latest plogic kernel, though
i'm not done testing it just yet )
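very roughly -- and glossing over all the GFS-specific pieces, which
the sistina docs cover -- the network block device half of that looks
something like this (port number and device names are just examples,
and the client-side device name depends on your kernel):

    # on the machine that actually has the disk:
    nbd-server 5000 /dev/sda3

    # on the node that wants to see that disk:
    nbd-client server-host 5000 /dev/nbd0

after that, /dev/nbd0 (or whatever your kernel calls it) behaves like
a local block device, and that's what you'd build GFS on.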
> PVFS (Parallel Virtual File System) seems like the most logical,
> but it only appears to be a kernel module and not a static compile.
again, not /strictly/ true, though yes, if you want a VFS-like
interface to pvfs you'll need the kernel module. I think PVFS for
your needs is very *il*logical. PVFS is great for making use of the
extra space on compute node disks in a *parallel* file system. since
you have no disks, you (obviously) have no extra space, nor could
you really make use of the P in PVFS.
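just to be concrete about the "VFS-like interface" part: with the
kernel module loaded and a pvfs filesystem mounted somewhere (/pvfs
below is just a made-up example path), perfectly ordinary POSIX code
works unmodified:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        long n;
        /* /pvfs/data is a hypothetical file on the pvfs mount */
        int fd = open("/pvfs/data", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        n = read(fd, buf, sizeof(buf));
        if (n < 0)
            perror("read");
        else
            printf("read %ld bytes\n", n);
        close(fd);
        return 0;
    }

without the module you'd link against the pvfs client library and go
through its own open/read-style calls instead, which is the sense in
which "kernel module only" isn't strictly true.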
> Any words of wisdom or experience? All help will be appreciated.
summary: use scyld. (and i'm sure the scyld guys would love to hear
why you can't :> )
==rob
--
[ Rob Latham <rlatham at plogic.com> Developer, Admin, Alchemist ]
[ Paralogic Inc. - www.plogic.com ]
[ ]
[ EAE8 DE90 85BB 526F 3181 1FCF 51C4 B6CB 08CC 0897 ]