[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

Greg Kurtzer gmkurtzer at gmail.com
Thu Sep 3 17:19:33 PDT 2009

On Thu, Sep 3, 2009 at 3:56 PM, Rahul Nabar <rpnabar at gmail.com> wrote:
> On Thu, Sep 3, 2009 at 3:16 PM, Greg Kurtzer <gmkurtzer at gmail.com> wrote:
> Thanks for the comments Greg!

Sure thing. Glad to offer what I can.

>> If you were using Perceus.....
> No. I've never used Perceus before and although it sounds interesting
> this seems like a bad time to try something new!

Unless it makes your job easier as you scale up. ;)

Feel free to check out:


>> The file system needs to be built to handle the load of the apps. 300
>> nodes means you can go from the low end (Linux RAID and NFS) to a
>> higher end NFS solution, or upper end of a parallel file system or
>> maybe even one of each (NFS and parallel) as they solve some different
>> requirements.
> What exactly do you mean by a "parallel" file system? Something like
> GPFS? That's IBM proprietary though, isn't it? On the other hand NFS
> seems pretty archaic. I've seen quite a few installations use Lustre.
> I am planning to play with that. Something in the OpenSource world to
> keep costs down.

Yes, GPFS is IBM's commercial file system and Lustre is a free
solution. Both are very complicated components of the cluster that
will take a large investment to do properly (either an initial
purchase cost plus the hidden cost of administration, or just a lot of
hidden administration cost).

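For context, the client side of Lustre looks deceptively simple; the
real investment is in the server-side setup (management, metadata, and
object storage servers). A minimal sketch of a client mount, assuming
a hypothetical management server at 10.0.0.1 and a file system named
"scratch":

```shell
# Mount the Lustre file system on a compute node.
# 10.0.0.1@tcp0 is the (assumed) MGS network ID; "scratch" the fs name.
mount -t lustre 10.0.0.1@tcp0:/scratch /mnt/scratch
```

The single mount command hides the cluster of servers behind it, which
is where the administration cost mentioned above actually lives.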
If cost is really an issue, *and* if the applications don't require a
parallel file system then why not make your job easier with the use of
a quality Network Attached Storage solution (NAS) and use NFS?

In either case, if you look around you can find people who may even
have premade Perceus VNFS capsules (the equivalent of an installer or
preconfigured disk image) for Lustre servers, clients, and various
other system roles.

Best of luck!
Greg Kurtzer
