Beowulf and variable cpus
Fitch, Chester
Chester.Fitch at mdx.com
Thu Sep 21 10:33:07 PDT 2000
As noted previously on this list, it all depends on your application. If
your application is tightly coupled, then yes, you will be limited by the
slowest compute node (unless you do some fairly sophisticated node profiling
and load balancing).
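To put a number on it (toy figures, purely illustrative, not anything measured): if every iteration ends in a barrier, each step costs as much as the slowest node's step, so the fast boxes mostly sit idle.

```python
# Toy numbers: a tightly coupled job with a barrier at the end of every
# iteration runs each step at the pace of the slowest node, no matter
# how fast the other nodes are.
node_step_times = {"p2-500": 1.0, "p100": 5.0}   # hypothetical seconds/step
steps = 100

coupled_total = steps * max(node_step_times.values())   # whole cluster paced by the P100s
fast_alone = steps * node_step_times["p2-500"]          # what a P2-500 could do unencumbered

print(coupled_total)
print(fast_alone)
```

With these made-up figures the mixed cluster takes five times as long as the fast machines would on their own, which is why profiling and load balancing matter for coupled codes.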
On the other hand, if your application allows it, building a heterogeneous
cluster is a good way to put those old machines to productive work. My
application, for example, is embarrassingly parallel: I'm running many,
many Monte Carlo simulations, each with differing input parameters. Each
simulation runs to completion on a single machine, and there is no
processor-to-processor communication required (other than between the Head
and Compute nodes). Therefore, I am able to utilize old hardware. What does
it matter (to the overall problem) if one node gets through 2 or 3
simulations in the time it takes the slowest node to finish? My system is
very small, only 7 compute nodes, consisting of two 386 machines, one 486,
and four Pentiums (of differing speeds), but I have been getting very good
throughput, considering...
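In case it helps anyone: the trick that makes mixed hardware work for this kind of job is pull-model dispatch. A rough Python sketch (hypothetical, not my actual harness; node names and timings are made up) follows:

```python
# Hypothetical sketch: a pull-model job queue. Each "node" grabs the
# next simulation as soon as it finishes its current one, so faster
# nodes naturally run more jobs and a slow node never holds up anything
# but its own current simulation.
import queue
import threading
import time

jobs = queue.Queue()
for params in range(20):               # 20 simulations, differing inputs
    jobs.put(params)

completed = {}                         # node name -> jobs it ran
lock = threading.Lock()

def node(name, speed):
    """Stand-in for one compute node; 'speed' is its relative work rate."""
    while True:
        try:
            params = jobs.get_nowait()
        except queue.Empty:
            return                     # queue drained, node goes idle
        time.sleep(0.05 / speed)       # stand-in for running the simulation
        with lock:
            completed.setdefault(name, []).append(params)

workers = [threading.Thread(target=node, args=(n, s))
           for n, s in [("p100", 1), ("p2-500", 5)]]
for w in workers:
    w.start()
for w in workers:
    w.join()

print({n: len(v) for n, v in completed.items()})
```

No node ever waits on another; the faster box just ends up with the bigger share of the tally.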
Obviously, YMMV...
Just my $0.02
Chet
> -----Original Message-----
> From: Jag [mailto:agrajag at linuxpower.org]
> Sent: Thursday, September 21, 2000 11:01 AM
> To: p.grimshaw at virgin.net
> Cc: beowulf at beowulf.org
> Subject: Re: Beowulf and variable cpus
>
>
> On Thu, 21 Sep 2000, p.grimshaw at virgin.net wrote:
>
> >
> > Hi, I am new to Beowulf and have some questions,
> >
> > 1. Does anyone know if I am able to run a beowulf cluster with
> > different types of clients, i.e I have a load of pentium 100s
> > and some p2 500s which I would like to use together. Is this
> > possible?
>
> This is possible, but not really recommended. There's no real way to
> ensure a certain part of a job gets run on a certain node, so you have
> to assume all nodes are equal. But if you do this and have unequal
> nodes, you'll find your jobs processing at the rate of the slowest
> nodes.
>
> I'd recommend just putting the 500's in the cluster, and if you really
> need the extra power, put the 100's into a separate cluster.
>
> Jag
>
>