undergrad senior project idea, help
Mark Hahn
hahn at physics.mcmaster.ca
Fri Sep 6 14:06:35 PDT 2002
> Thanks for the details, really informative. Do you know of any easy way to
> measure or determine the power drawn by a system besides calculating each
> component's?
summing component power is really only useful for figuring worst-case power,
and will always be rather approximate. I talked my local electrician into
measuring a representative dual-Xeon node: dual Prestonia 2.0 GHz, Intel
WV e7500 board, two 1G DIMMs, single 9G 10K RPM SCSI disk. roughly we saw:
    powerup/spinup  130 W
    idle             60 W
    active          110 W  (make -j in kernel sources, cold cache)
to do this, he tossed together a sort of extension cord that had a plug,
three separate power/neutral wires, and an outlet. his clamp-on RMS
ammeter gave us the numbers - the current through any one of the wires.
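the conversion from a clamped current reading to watts is simple; here's a
minimal sketch (not from the original message - the 120 V line voltage and
the power factor are assumed, and PC supplies without PF correction can be
well below 1.0):

```python
# Convert a clamp-on RMS ammeter reading to approximate real power.
# volts_rms and power_factor are assumed values, not measured ones.
def watts(amps_rms, volts_rms=120.0, power_factor=0.7):
    return volts_rms * amps_rms * power_factor

# a reading of ~1.3 A would correspond to roughly the "active" figure:
print(watts(1.3))  # ~109 W under the assumed PF
```

with a true-RMS meter and a known power factor this gets you within a few
percent; summing per-component maximums, by contrast, only bounds the worst case.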
I was actually expecting higher numbers because a friend of mine reported
around 250W peak from a dual-athlon. he might well have done a better job
of simulating peak load, though: floating point, for instance.
> Also, I notice that you and Mark Hahn are from the Physics
> area. Is there a particular set of problems within Physics that you use
> for your Beowulf or a website that might have some info? I really enjoy
> working with the Physics dept. here so I thought I would ask.
I'm attached to Physics mainly for practical reasons - the project was
pushed locally by a Physics faculty member who does astro simulations.
most of our cycles are currently consumed by chem and bio people though.
it's somewhat interesting that these different communities of users have
different hardware needs. the astro people have codes that want quite
a lot of ram, lots of dram bandwidth, and low-latency, high-bandwidth
interconnect. these are physically-based simulations where quite a lot
of state in a model needs to be globally communicated each timestep.
stats/bio and some math types tend to be more into "embarrassingly parallel" -
things like Monte Carlo simulations and genetic algorithms are sampling and
search problems where little global communication is necessary,
and problems can often be scaled without loss to fit any number of CPUs.
their simulations also tend not to use much RAM - maybe 10M per CPU
versus 1G/CPU for astro. embarrassingly-parallel codes would probably
also be happy with a big pile of terahertz uniprocessors networked with
a combination of USB and linguini.
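to make the "little global communication" point concrete, here's a minimal
sketch of such a job (my illustration, not anything the astro or bio groups
actually run): each worker estimates pi by Monte Carlo on its own, and the
only traffic is one number per worker at the end.

```python
# Embarrassingly parallel Monte Carlo: workers share nothing until
# the final one-number-per-worker reduction.
import random
from multiprocessing import Pool

def estimate_pi(samples):
    random.seed()  # reseed so forked workers diverge
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    with Pool(4) as pool:
        estimates = pool.map(estimate_pi, [100_000] * 4)
    print(sum(estimates) / len(estimates))  # converges toward pi
```

scaling this to more CPUs is just a longer work list - exactly why such
codes are happy on cheap interconnect.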
the chem people mostly seem to run big, old applications like Gaussian98,
and don't often use many processors (mostly 1 or 4, and generally SMP).
it seems like the chemists either use modest amounts of ram (50M),
or else want 50G (in which case they wind up doing astonishing amounts
of disk IO instead, of course).
anyway, we're about to add O(50) more nodes of the 100bT/dual-xeon
variety for the folks who don't really take advantage of the spiffy
Quadrics interconnect.
if you're an astro/math/bio/chem person, please don't take offense
at any of these overgeneralizations ;)
regards, mark hahn.
More information about the Beowulf
mailing list