[Beowulf] Some beginner's questions on cluster setup
Carsten Aulbert
carsten.aulbert at aei.mpg.de
Wed Jul 8 22:47:01 PDT 2009
Hi
P.R. wrote:
> I'm planning on building a small 20+ node cluster, and I have some basic
> questions.
> We're planning on running 5-6 motherboards with quad-core 3.0 GHz AMD
> Phenoms, and 4 GB of RAM per node.
> Off the bat, does this sound like a reasonable setup?
>
I guess that fully depends on what you want to accomplish. If you want
to use it as a proof-of-concept design or as a cluster for smaller tasks,
I think it looks reasonable. If you wanted to render IceAge4 with it, I
think you'd need more power ;)
> My first question is about node file & operating systems:
> I'd like to go with a diskless setup, preferably using an NFS root for each
> node.
> However, based on some of the testing I've done, running the nodes off of
> the NFS share(s) has turned out to be rather slow & quirky.
> Our master node will be running on completely different hardware than the
> slaves, so I *believe* that will make it more complicated & tedious to set
> up & update the nfsroots for all of the nodes (since it's not simply a
> matter of 'cloning' the master's setup & config).
> Is there any truth to this, or am I way off?
>
5-6 boxes off an NFS root should not be a large burden on the server, as
long as it has decent disk speeds (a small RAID, perhaps) and plenty of
memory for caching (a couple of GB should be sufficient to start with).
Try tuning the NFS parameters to suit your needs.
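Something like this might be a starting point (the server name, paths,
and values below are just examples, not tuned recommendations):

    # On the server, in /etc/exports: export the root read-only
    /srv/nfsroot  192.168.1.0/24(ro,no_root_squash,async,no_subtree_check)

    # Bump the number of NFS server threads; the default of 8 is often
    # too low (on Debian: /etc/default/nfs-kernel-server)
    RPCNFSDCOUNT=16

    # On a node, experiment with block sizes and TCP vs. UDP
    mount -o ro,nfsvers=3,tcp,rsize=32768,wsize=32768 \
        server:/srv/nfsroot /mnt/root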
> Can anyone provide any general advice or feedback on how to best setup a
> diskless node?
Not really; we only boot diskless during the installation phase.
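For what it's worth, a minimal PXE/NFS-root setup usually boils down to
a pxelinux entry along these lines (all names and addresses here are
placeholders), plus a DHCP/TFTP server handing out the kernel and initrd:

    LABEL diskless
        KERNEL vmlinuz
        APPEND initrd=initrd.img root=/dev/nfs \
            nfsroot=192.168.1.1:/srv/nfsroot ip=dhcp ro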
>
>
> The alternative that I was considering was using (4GB?) USB flash drives
> to drive a full-blown, local OS install on each node...
> Q: does anyone have experience running a node off of a USB flash drive?
> If so, what are some of the pros/cons/issues associated with this type of
> setup?
>
We only rarely do that, for rescue setups, since our nodes don't have CD
drives; I think USB flash drives are still pretty slow.
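If you want hard numbers before committing, a crude dd test will tell you
what a given stick can do (assuming it is mounted at /mnt/usb; adjust the
paths to taste):

    # sequential write, forcing data to the device before dd exits
    dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=512 conv=fdatasync
    # drop the page cache so the read test hits the device, not RAM
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/usb/testfile of=/dev/null bs=1M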
>
> My next questions are regarding the network setup.
> Each motherboard has an integrated gigabit nic.
>
> Q: should we be running 2 gigabit NICs per motherboard instead of one?
> Is there a 'rule-of-thumb' when it comes to sizing the network requirements?
> (e.g., 'one NIC per 1-2 processor cores'...)
>
Again, that all depends on your workload and jobs. I don't think anyone
can help you there until you know what the workload will be.
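If you do end up adding a second NIC per board, Linux channel bonding is
one way to use both; a minimal sketch in Debian's ifenslave style
(interface names and the bonding mode are assumptions; balance-rr only
helps if the switch cooperates):

    auto bond0
    iface bond0 inet dhcp
        bond-slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100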
>
> Also, we were planning on plugging EVERYTHING into one big (unmanaged)
> gigabit switch.
> However, I read somewhere on the net that another cluster was physically
> separating NFS & MPI traffic on two separate gigabit switches.
> Any thoughts as to whether we should implement two switches, or should we
> be OK with only one switch?
>
Well, again, it depends on what you need. I'd start off with a single
switch and see if NFS traffic is killing your MPI performance. On larger
sites the storage and interconnect networks are often separated, as they
might interfere with each other too much, but it boils down to the
question of how much money you have. Two 8-port GBit switches are cheap
enough for testing; two 1000+-port GBit switches (or InfiniBand, ...)
are not ;)
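Splitting the traffic is mostly a matter of giving each network its own
subnet and pinning each kind of traffic to one interface. A rough sketch,
assuming eth0 carries storage and eth1 carries MPI (the addresses and the
Open MPI invocation are examples):

    # /etc/fstab on a node: mount storage via eth0's subnet
    192.168.1.1:/srv/data  /data  nfs  ro,tcp  0 0

    # if you use Open MPI, restrict its TCP traffic to eth1
    mpirun --mca btl_tcp_if_include eth1 -np 24 ./your_mpi_job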
>
> Notes:
> The application we'll be running is NOAA's wavewatch3, in case anyone has
> any experience with it.
> It will generate a fair amount of NFS traffic (each node must read a common
> set of data at periodic intervals),
> and I *believe* that the MPI traffic is not extremely heavy or constant
> (i.e., nodes do large amounts of independent processing before sending
> results back to master).
>
With this small number of machines I would go with two 8-port switches
and see what happens. No idea how wavewatch3 works or what it needs, sorry.
>
> I'd appreciate any help or feedback anyone would be willing and able to offer...
>
I hope my reply helps a little bit.
Cheers
Carsten