Building a beowulf with old computers
Robert G. Brown
rgb at phy.duke.edu
Mon Mar 10 07:30:26 PST 2003
On Mon, 10 Mar 2003 ds10025 at cam.ac.uk wrote:
> Thanks for reply.
>
> Will any cluster monitoring work on such low-spec PCs?
xmlsysd's RSS footprint is around 1-2 MB, depending on what is being
monitored, but you're definitely going to have problems running the
kernel, a minimal set of OS adjuncts, a monitoring tool, and a parallel
(or any) application in 32 MB.
Your basic problem is going to be that this is just not a whole lot of
memory for modern/current kernels and distributions. I don't know what
the minimum footprint of a system is these days, but at a guess it will
be in the 4-8 MB range, with a bit more required to get the system going
than is actually occupied in operation.
On top of this will be things like pvmd (perhaps 1 MB), a monitoring
daemon, xinetd if you run it, sshd if you run THAT, all sucking down 1-2
MB each, although you can likely run without all of them if you really
have to. Perhaps 12-16 MB of total "minimum" footprint.
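To make that arithmetic concrete, here is a trivial sketch (the numbers
are the guesses above, not measurements):

    # Rough memory budget for a 32 MB node.  All figures are the
    # guesses from the discussion above, not measured values.
    footprint_mb = {
        "kernel + core OS":  6,   # guessed 4-8 MB; split the difference
        "pvmd":              1,
        "monitoring daemon": 2,   # e.g. xmlsysd's 1-2 MB RSS
        "xinetd":            1,
        "sshd":              2,
    }
    system = sum(footprint_mb.values())
    print("system footprint: %d MB" % system)             # ~12 MB
    print("left for applications: %d MB" % (32 - system))

which is where the application ceiling below comes from.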
What is left is for your applications. Running so many things that any
application swaps is a bad idea. You might therefore be able to run
applications as large as 10-16 MB with a deliberately stripped node
configuration. Things like Scyld or clustermatic may be de-facto
"pre-stripped" (or strippable) to some minimal footprint that might do a
bit better, I don't know. They tend to eliminate a lot of stuff, but
have to replace at least some of it with their own control interface for
launching and managing jobs.
However, memory is >>very<< cheap, right? As in pricewatch.com lists 64
MB EDO for $8 a stick. So for $16 you can put at least 128 MB in each
node (PC 133 SDRAM is comparable in price). This is enough that you
could easily run diskless nodes, you could run largish applications, you
could even run X in a pinch on a node and not necessarily swap. It
would certainly allow you to stop worrying about this dimension of node
capacity.
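The (trivial) cost arithmetic, if you want to play with it; the cluster
size here is purely hypothetical:

    # Per-node upgrade cost at the pricewatch.com prices quoted above
    sticks, per_stick_dollars = 2, 8   # two 64 MB EDO sticks at $8 each
    nodes = 16                         # hypothetical cluster size
    print("memory/node: %d MB" % (sticks * 64))                # 128 MB
    print("cost/node:   $%d" % (sticks * per_stick_dollars))   # $16
    print("whole cluster: $%d" % (nodes * sticks * per_stick_dollars))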
This is what I meant when I said that you're likely going to have to
invest a bit in your nodes if they only have 32 MB of memory each and
old Pentium-class (P5) CPUs. Put enough memory on them and you can
install a stub operating system (just /, /var, /tmp, /boot) on the small
node disks and use the rest as swap. Mount /usr and /home from a
server. Possibly add a decent (100BT) PCI network card, and avoid or
replace ISA cards if your motherboard and system configuration permit,
as ISA network cards reduce performance.
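A node's /etc/fstab under that scheme might look something like the
following (the device names, partition layout, and the server name
"server" are all made up; adjust to your hardware):

    # local stub: /, /boot, /var, /tmp on the small node disk
    /dev/hda1     /       ext2    defaults        1 1
    /dev/hda2     /boot   ext2    defaults        1 2
    /dev/hda3     /var    ext2    defaults        1 2
    /dev/hda5     /tmp    ext2    defaults        1 2
    # whatever is left of the disk becomes swap
    /dev/hda6     swap    swap    defaults        0 0
    # the big, shared trees come from the server over NFS
    server:/usr   /usr    nfs     ro,hard,intr    0 0
    server:/home  /home   nfs     rw,hard,intr    0 0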
This way you can run a current linux distribution, run decently large
jobs, and not have to "squeeze" too much or spend as much time working
around a shortage of memory as I suspect you otherwise would.
Remember, your TIME is a tradeoff here. Even if you spend $30-50 per
node fixing them up, you might save hours and hours of screwing around
trying to shoehorn linux onto them, only to find at the end that they don't
have enough memory for you to be able to DO anything interesting with
them. Even a hobby-level cluster is worth a small investment...
rgb
>
>
> Dan
> At 20:01 09/03/03 -0500, Robert Myers wrote:
> >Robert G. Brown wrote:
> >
> >>The sad truth is that cluster nodes have an ECONOMICALLY useful lifetime
> >>of somewhere between 18 months and 3 years, depending on lots of things,
> >>although one can arguably get work done out to 5 years on nodes that
> >>require no human time to run or repair that other people are paying to
> >>feed and cool.
> >>
> >>
> >That makes a strong argument for considering energy consumption when
> >building a cluster in the first place. Lower energy consumption = Lower
> >energy cost, longer economically useful life = Lower TCO/year.
> >
> >Same argument works for server blades, and I'm amazed that energy costs
> >don't come up as a consideration more often.
> >
> >A researcher at LANL has built a Transmeta-based cluster called Green
> >Destiny, making the energy-cost argument, which is documented in
> >
> >http://public.lanl.gov/feng/Bladed-Beowulf.pdf
> >
> >He claims a much lower TCO for his Transmeta-based system, but only a
> >small part of the claimed savings is electricity costs.
> >
> >RM
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb at phy.duke.edu