<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title></title>
</head>
<body>
<br>
David;<br>
<br>
It sounds to me as though you are seeking a low-power Beowulf as a solution.<br>
A few people have built such machines, and it is possible to build
a fast, useful Beowulf<br>
cluster that uses very little electrical power and still has sufficient muscle
to do some serious work.<br>
<br>
I have a small 14-node cluster which I built a year ago. It uses very little
power and runs so cool that no room air conditioning is needed.<br>
In fact, my P4 machine makes more noise and heat than the Beowulf cluster
in my apartment.<br>
<br>
<a class="moz-txt-link-freetext" href="http://mini-itx.com/projects/cluster/ ">http://mini-itx.com/projects/cluster/ </a><br>
<br>
The above link shows the original 12 node configuration.<br>
<br>
At present, there are some motherboards available which offer a very nice
combination of cost, performance, and low power use.<br>
The trick is to "right-size" everything for your needs and available resources.
The downside is that the small, low-power, go-fast stuff is a little more
pricey than the plain vanilla PC hardware a Beowulf is usually built from,
but not insanely so.<br>
<br>
Transmeta has some nice boards, and the mini-itx boards are not bad at
all for the cost. Also, there are some rather nice small form factor motherboards
that use AMD's Geode CPU. When I compare cost, power use, and performance,
so far the most attractive motherboards seem to be the mini-itx boards with
the Nehemiah-core CPU.<br>
However, with some low-power Geode boards now running at up to 1500 MHz, that
may change. The Transmeta boards are probably the fastest of the low-power
boards, but their power use per MIPS is not as good as other boards', if you
believe the Transmeta printed specifications.<br>
<br>
<br>
Glen<br>
<br>
PS:<br>
<br>
I have also made a few comments below.<br>
<br>
David Mathog wrote:<br>
<blockquote type="cite"
cite="midE1D1uXH-0002X1-00@mendel.bio.caltech.edu">
<pre wrap="">At Wed, 16 Feb 2005 19:08:05 +0100 Vincent Diepeveen wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Date: Wed, 16 Feb 2005 19:08:05 +0100
From: Vincent Diepeveen <a class="moz-txt-link-rfc2396E" href="mailto:diep@xs4all.nl"><diep@xs4all.nl></a>
Subject: Re: [Beowulf] Academic sites: who pays for the electricity?
To: "David Mathog" <a class="moz-txt-link-rfc2396E" href="mailto:mathog@mendel.bio.caltech.edu"><mathog@mendel.bio.caltech.edu></a>,
<a class="moz-txt-link-abbreviated" href="mailto:beowulf@beowulf.org">beowulf@beowulf.org</a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:3.0.32.20050216190804.0106fcc0@pop.xs4all.nl"><3.0.32.20050216190804.0106fcc0@pop.xs4all.nl></a>
Content-Type: text/plain; charset="us-ascii"
At 08:16 16-2-2005 -0800, David Mathog wrote:
</pre>
<blockquote type="cite">
<pre wrap="">In most universities services like electricity, water, and
A/C are paid for by the school. To do so they take "overhead"
out of every grant. Partially as a consequence of this they
typically have a very poor ability to meter usage on a room
by room basis.
Now somewhere between the 10 node Pentium II beowulf sitting on
a lab bench and the 1000 node dual P4 Xeon beowulf in a machine
room that takes up half the basement the cost of the electricity
(both for power and A/C) goes from a minor expense to a major
one. Really major. For instance, in that hypothetical large machine,
at 10 cents per kilowatt hour (a round number), assuming 100 watts
per CPU (another round number) that's:
</pre>
</blockquote>
</blockquote>
</blockquote>
For a dual P4 Xeon machine at full throttle, it comes out to about 250 watts
per node (or a little less), including the network adapters and switching.<br>
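<br>
Redoing your arithmetic with that 250 W/node figure (just a back-of-the-envelope
sketch; the node count and the 10 cents/kWh rate are your round numbers, not
measured values):<br>
<pre wrap="">
# Annual electricity cost sketch (Python).
# Assumptions: 1000 nodes at 250 W each (CPUs + NICs + share of
# switching), flat $0.10/kWh rate, machine running flat out all year.
nodes = 1000
watts_per_node = 250.0
dollars_per_kwh = 0.10
hours_per_year = 365 * 24                     # 8760

kw_total = nodes * watts_per_node / 1000.0    # 250 kW steady draw
annual_cost = kw_total * dollars_per_kwh * hours_per_year
print("%.0f dollars/year" % annual_cost)      # -> 219000 dollars/year
</pre>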
<blockquote type="cite"
cite="midE1D1uXH-0002X1-00@mendel.bio.caltech.edu">
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">
1000 (nodes) *
2 (cpus/node) *
.1 (kilowatts/cpu) *
.1 (dollars/kilowatt-hour) *
365 (days /year) *
24 (hours/day) =
-----------------------
175200 dollars/year
</pre>
</blockquote>
<pre wrap="">Complete academic nonsense calculation. If you use quite some electricity
the electricity gets up to factor 20-40 cheaper. Getting a factor 10
reduction in usage bill is pretty easy if you negotiate properly.
</pre>
</blockquote>
<pre wrap=""><!---->
Well, it isn't complete nonsense, unless you care to dispute the
number of days in a year, hours in a day, or cpus in a dual node
computer!
The only term you're complaining about is the price of
electricity. I'm not privy to the electrical rates that our
school pays, they may well be an order of magnitude lower. My
home rates certainly aren't, but then, I don't buy as much
power as the campus. It's also not at all clear that the
campus would sell power to the end users at the same rate
which it pays the utility.
</pre>
</blockquote>
You are forgetting the cost of cooling the cluster. Big machines make a lot
of heat, and need a lot of cooling.<br>
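To put a hedged number on it: if the A/C load is, say, half again the machine
load (an assumption for illustration, not a measured figure for any particular
machine room), the sketch above scales up accordingly:<br>
<pre wrap="">
# Continuing the earlier sketch: cooling overhead.
# The 1.5x multiplier is an assumed figure; real A/C efficiency
# varies widely with the installation.
it_cost = 219000.0        # dollars/year from the previous sketch
cooling_factor = 1.5      # 1.0 = machine load, 0.5 = assumed A/C overhead
print("%.0f dollars/year" % (it_cost * cooling_factor))   # -> 328500
</pre>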
<br>
<blockquote type="cite"
cite="midE1D1uXH-0002X1-00@mendel.bio.caltech.edu">
<pre wrap="">
I don't really understand your point about keeping the units
running versus restarting them. Sure, it would be really bad
to try to boot all 1000 nodes simultaneously, in all likelihood
it wouldn't work. That's why they are typically started at N
second intervals, where N depends on your hardware.
Surely there is some N large enough so that the peak current
draw during the restart never exceeds the random fluctuations
observed when all units are running normally. Or is your
point that the electricity company doesn't want the facility
to draw _less_ current than it uses normally at
steady state?
</pre>
</blockquote>
It is important to keep the cluster up and running, and to cycle the power
only when you must.<br>
The inrush currents at turn-on stress components and shorten the life of the
nodes considerably.<br>
Also, thermal cycling puts mechanical stresses on boards and components that
can cause components and connections to fail.<br>
<br>
In a large cluster that is middle-aged (around 2 years old), you can reasonably
expect to lose a couple of nodes every time you power down and come back up.
After a while, this can be expensive.<br>
Shutting down a big machine is not a trivial thing.<br>
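<br>
When you do have to bring a big machine back up, your point about staggering
the startup at N-second intervals is the right one. A minimal sketch of the
idea, assuming the nodes support wake-on-LAN; the MAC addresses and the
5-second interval are made up for illustration:<br>
<pre wrap="">
# Staggered cluster power-on via wake-on-LAN (sketch).
# Pick the interval so the combined inrush current never exceeds
# the headroom on your circuits.
import socket
import time

node_macs = ["00:11:22:33:44:01", "00:11:22:33:44:02"]  # hypothetical list
interval_seconds = 5

def wake(mac):
    # A WOL "magic packet" is 6 bytes of 0xFF followed by the
    # target MAC repeated 16 times, sent as a UDP broadcast.
    raw = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + raw * 16
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(packet, ("255.255.255.255", 9))
    s.close()

for mac in node_macs:
    wake(mac)
    time.sleep(interval_seconds)   # spread out the inrush current
</pre>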
<br>
<blockquote type="cite"
cite="midE1D1uXH-0002X1-00@mendel.bio.caltech.edu">
<pre wrap="">
On a somewhat related note, it would be nice if rack nodes
had some graceful way to conserve electricity. For instance,
something along the lines of: if the CPU utilization goes
below 5% for 10 seconds ratchet the clock down by a factor of 10.
When CPU usage goes above 90% ratchet for 2 seconds move it back
up again. Notebooks can do this sort of thing, but it seems not
to be a "feature" of most full size motherboards. This should
also lower the average temperature in the case, at the expense
of increased thermal cycling. Hard to say off hand if that's
a plus or a minus as far as hardware longevity goes. Certainly
it would be a plus in terms of energy conservation.
</pre>
</blockquote>
<br>
A lot of modern CPUs have the ability to actually shut off unused internal
circuitry.<br>
VIA CPUs, AMD's Geode, Transmeta, and some Intel CPUs have these features.<br>
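<br>
On full-size boards running Linux, the policy you describe can be approximated
in software through the kernel's cpufreq sysfs interface (recent 2.6 kernels'
"ondemand" governor does roughly this automatically). Below is a minimal
daemon-style sketch of your rule; it assumes cpufreq support on cpu0 with the
"userspace" governor active, needs root, and ratchets down to the board's
minimum frequency rather than a literal factor of 10:<br>
<pre wrap="">
# Utilization-driven clock policy (sketch): below 5% load for 10 s,
# drop to minimum frequency; above 90%, go back to maximum.
import time

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

def read_stat():
    # First line of /proc/stat: "cpu user nice system idle ..."
    nums = [int(x) for x in open("/proc/stat").readline().split()[1:]]
    return nums[3], sum(nums)          # (idle jiffies, total jiffies)

def utilization(prev, cur):
    didle, dtotal = cur[0] - prev[0], cur[1] - prev[1]
    return 1.0 - float(didle) / dtotal if dtotal else 0.0

def set_speed(khz):
    open(CPUFREQ + "/scaling_setspeed", "w").write(str(khz))

fmin = int(open(CPUFREQ + "/scaling_min_freq").read())
fmax = int(open(CPUFREQ + "/scaling_max_freq").read())

prev = read_stat()
low_since = None
while True:
    time.sleep(2)
    cur = read_stat()
    util = utilization(prev, cur)
    prev = cur
    if util < 0.05:
        low_since = low_since or time.time()
        if time.time() - low_since >= 10:
            set_speed(fmin)            # idle >= 10 s: ratchet down
    else:
        low_since = None
        if util > 0.90:
            set_speed(fmax)            # busy: back to full clock
</pre>
As you note, this trades average case temperature for more thermal cycling,
so whether it is a net win for hardware longevity is an open question.<br>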
<br>
<blockquote type="cite"
cite="midE1D1uXH-0002X1-00@mendel.bio.caltech.edu">
<pre wrap="">
Regards,
David Mathog
<a class="moz-txt-link-abbreviated" href="mailto:mathog@caltech.edu">mathog@caltech.edu</a>
Manager, Sequence Analysis Facility, Biology Division, Caltech
_______________________________________________
Beowulf mailing list, <a class="moz-txt-link-abbreviated" href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a>
To change your subscription (digest mode or unsubscribe) visit <a class="moz-txt-link-freetext" href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</a>
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="$mailwrapcol">--
Glen E. Gardner, Jr.
AA8C
AMSAT MEMBER 10593
<a class="moz-txt-link-abbreviated" href="mailto:Glen.Gardner@verizon.net">Glen.Gardner@verizon.net</a>
<a class="moz-txt-link-freetext" href="http://members.bellatlantic.net/~vze24qhw/index.html">http://members.bellatlantic.net/~vze24qhw/index.html</a>
</pre>
<br>
</body>
</html>