[Beowulf] Newbie Question: Racks versus boxes and good rack solutions for commodity hardware

Robert G. Brown rgb at phy.duke.edu
Sun Dec 14 09:10:47 PST 2008


On Sat, 13 Dec 2008, arjuna wrote:

> A simple question though...Aluminum plates are used because aluminum does
> not conduct electricity. Is this correct?

Aluminum is an EXCELLENT conductor of electricity, one of the best!
Basically all metals conduct electricity.  When you mount the
motherboards you MUST take care to use spacers in the right places
(under the holes for mounting screws on the motherboards, usually) to
keep the solder traces of the motherboard from shorting out!

Your question makes me very worried on your behalf.  Electricity is
quite dangerous, and in general messing with it should be avoided by
anyone that does not already know things like this.  In India, with 240
VAC as standard power, this is especially true.  True, the power
supplied to the motherboards is in several voltages 12V and under, but
believe it or not you can kill yourself with 12V, and starting a fire
with 12V is even easier.

I would >>strongly<< suggest that you find a friend with some electrical
engineering experience, or read extensively on electricity and
electrical safety before attempting any sort of motherboard mount.
Mark's suggestion of hot melt glue, for example, is predicated on your
PRESUMED knowledge that cookie sheets or aluminum sheets are
conductors, that the motherboard has many traces carrying current, and
that when you mount the motherboard you must take great care to ensure
that current-carrying traces CANNOT come in contact with metal.

The reasons aluminum plates are suggested are a) it's cheap; b) it's
easily drilled/tapped for screws; c) it's fireproof AS LONG AS YOU DON'T
GET IT TOO HOT (heaven help you if you ever do start it on fire, as it
then burns like thermite -- oh wait, thermite IS aluminum plus iron
oxide); d) it reflects/traps EM radiation.

Wood would be just as good except for the fireproof bit (a big one,
though -- don't use wood) and the EM reflecting part.

The aluminum plates should probably all be grounded back to a common
ground.  The common ground should NOT be a current carrying neutral --
I'm not an expert on 240 VAC as distributed in India and hesitate to
advise you on where/how to safely ground them.  You should probably read
about "ground loops" before you mess with any of this.

Seriously, this is dangerous and you can hurt yourself or others if you
don't know what you are doing.  You need to take the time to learn to
the point where you KNOW how electricity works and what a conductor is
vs an insulator and what electrical codes are and WHY they are what they
are before you attempt to work with bare motherboards and power
supplies.  It is possible to kill yourself with a nine volt transistor
radio battery (believe it or not) although you have to work a bit to do
so.  It is a lot easier with 12V, and even if you don't start a fire,
you will almost certainly blow your motherboard/CPU/memory and power
supply if you short out 12V in the wrong place.

> Also for future reference, I saw a reference to dc-dc converters for power
> supply. Is it possible to use motherboards that do not guzzle electricity
> and generate a lot of heat and are yet powerful? It seems that not much more
> is needed than motherboards, CPUs, memory, harddrives and an ethernet card.
> For a low energy system, has any one explored ultra low energy consuming and
> heat generating power solutions that maybe use low wattage DC?

The minimum power requirements are dictated by your choice of
motherboard, CPU, memory, and peripherals.  Period.  They require
several voltages to be delivered into standardized connectors from a
supply capable of providing sufficient power at those voltages.  Again,
it is clear from your question that you don't understand what power is
or the thermodynamics of supplying it, and you should work on learning
this (where GIYF).  As I noted in a previous reply, typical motherboard
draws are going to be in the 100W to 300+W range when loaded, and either
you provide this or the system fails to work.  To provide 100W to the
motherboard, your power supply will need to draw 20-40% more than this,
lost in the conversion from 120 VAC or 240 VAC to the power provided to
the motherboard and peripherals.  Again, you have no choice here.
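
To put rough numbers on that overhead, here is a back-of-the-envelope
sketch in Python.  The 70-85% efficiencies below are illustrative
assumptions consistent with the 20-40% figure above, not measurements
of any particular supply:

    # Wall draw = DC load / conversion efficiency (assumed figures only).
    def wall_draw_watts(dc_load_watts, efficiency):
        return dc_load_watts / efficiency

    for load in (100, 200, 300):          # typical loaded draws, in watts
        for eff in (0.70, 0.80, 0.85):    # assumed PSU efficiencies
            print("%3d W load at %2.0f%% efficiency -> %5.1f W from the wall"
                  % (load, eff * 100, wall_draw_watts(load, eff)))

For example, a 100W load at 80% efficiency pulls about 125W from the
wall, i.e. 25% more than the motherboard itself uses.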

The places you do have a choice are:

   a) Buying motherboards etc. with lower power requirements.  If you are
using recycled systems, you use what you've got, but when you buy in the
future you have some choice here.  However, you need to be aware of what
you are optimizing!  One way to save power is to run at a lower clock, for
example -- there is a tradeoff between power drawn and speed.  But
slower systems just mean you draw lower power for longer, and you may
well pay about the same for the net energy required for a computation!
What you need to optimize is energy per job: average draw under load
times the time required to complete a computation, not just "power",
weighted by how fast you want your computations to complete and by your
budget (see the toy comparison after item b below).

   b) You have a LIMITED amount of choice in power supplies.  That's the
20-40% overhead indicated above.  A cheap power supply, or one that is
incorrectly sized relative to the load, is relatively inefficient: it is
more likely to waste a lot of power as heat at baseline and to be on the
high end of the wall draw required to operate a given motherboard.  A
more expensive one, correctly sized for the application, will waste less
energy as heat while providing the NECESSARY power for your system.
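
As a toy illustration of the point in (a), that what matters is the
energy a job consumes rather than the wattage number by itself, here is
a short Python sketch.  The node wattages and runtimes are invented for
illustration, not benchmarks:

    # Energy per job = average draw under load (watts) x hours to finish.
    # All figures are made up purely for illustration.
    nodes = {
        "fast, power-hungry node": (250.0, 1.0),   # (watts, hours per job)
        "slow, low-power node":    (130.0, 2.0),
    }

    for name, (watts, hours) in nodes.items():
        print("%-25s %6.1f Wh per job, %.1f h wall clock"
              % (name, watts * hours, hours))

Here the "slow" node burns 260 Wh per job versus 250 Wh for the fast
one: essentially the same energy bill, but the fast node finishes in
half the time.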

That is, you don't have a lot of choice when getting started -- you're
probably best off just taking the power supplies out of the tower cases
of your existing systems and using them (or better, just using a small
stack of towers without remounting them until you see how clustering
works for you, which is safe AND effective).  When you have done some
more research and learned about electricity, power supplies, and so on
using a mix of Google/web, books, and maybe a friend who works with
electricity and is familiar with power distribution and code
requirements (if any) in New Delhi, THEN on your SECOND pass you can
move on to a racked cluster with custom power supplies matched to
specific "efficient" motherboards.

    rgb

> 
> On Sat, Dec 13, 2008 at 8:50 AM, Mark Hahn <hahn at mcmaster.ca> wrote:
>             What is 1u?
> 
>
>       rack-mounted hardware is measured in units called "units" ;)
>       1U means 1 rack unit: roughly 19" wide and 1.75" high.  racks are
>       all the same width, and a rackmount unit consumes some number of
>       units in height.  (rack depth is moderately variable.)  (a full
>       rack is generally 42U.)
>
>       a 1U server is a basic cluster building block - pretty well suited,
>       since it's not much taller than a disk, and fits a motherboard
>       pretty nicely (clearance for DIMMs if designed properly, a couple
>       optional cards, passive CPU heatsinks.)
>
>             What is a blade system?
> 
> 
> it is a computer design that emphasizes an enclosure and fastening
> mechanism
> that firmly locks buyers into a particular vendor's high-margin line
> ;)
> 
> in theory, the idea is to factor a traditional server into separate
> components, such as shared power supply, unified management, and often
> some semi-integrated network/san infrastructure.  one of the main
> original selling points was power management: that a blade enclosure
> would have fewer, more fully loaded, more efficient PSUs.  and/or more
> reliable.  blades are often claimed to have superior manageability.
> both of these factors are very, very arguable, since it's now routine
> for 1U servers to have nearly the same PSU efficiency, for instance.
> and in reality, simple manageability interfaces like IPMI are far
> better (scalably scriptable) than a too-smart gui per enclosure,
> especially if you have 100 enclosures...
>
>       goes into a good rack in terms of size and material (assuming it
>       has to be insulated)
> 
> 
> ignoring proprietary crap, MB sizes are quite standardized.  and since
> 10 million random computer shops put them together, they're incredibly
> forgiving when it comes to mounting, etc.  I'd recommend just
> glue-gunning stuff into place, and not worrying too much.
>
>       Anyone using clusters for animation on this list?
> 
> 
> not much, I think.  this list is mainly "using commodity clusters to
> do stuff fairly reminiscent of traditional scientific supercomputing".
> 
> animation is, in HPC terms, embarrassingly parallel and often quite
> IO-intensive.  both those are somewhat derogatory.  all you need to do
> an animation farm is some storage, a network, nodes and probably a
> scheduler or at least task queue-er.
> 
> 
> 
> 
> --
> Best regards,
> arjuna
> http://www.brahmaforces.com
> 
>

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu


