[Beowulf] Newbie Question: Racks versus boxes and good rack solutions for commodity hardware

arjuna brahmaforces at gmail.com
Sat Dec 13 03:48:04 PST 2008


Hello All,

Thank you for your detailed responses. Following your line of thought,
advice and web links, it seems that it is not difficult to build a small
cluster to get started. I explored the photos of the various clusters that
have been posted and it seems quite straightforward.

It seems I have been seized by a mad inspiration to do this...The line of
thought is to make a 19 inch rack with aluminum plates on which the
motherboards are mounted.

The plan is first to simply create one using the old computers I have...This
can be an experimental one to get going...Thereafter it would make sense to
research the right motherboards, cooling and so on...

It seems that I am going to take the plunge next week and wire these three
computers on a home-grown rack...

A simple question though...Aluminum plates are used because aluminum does
not conduct electricity. Is this correct?

Also, for future reference, I saw a reference to DC-DC converters for power
supply. Is it possible to use motherboards that do not guzzle electricity
or generate a lot of heat, yet are still powerful? It seems that not much
more is needed than motherboards, CPUs, memory, hard drives and an ethernet
card. For a low energy system, has anyone explored power solutions that
consume very little energy and generate very little heat, perhaps using
low-wattage DC?
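
(For a sense of scale, assuming a hypothetical node drawing 65 W around the
clock: 65 W x 24 h = 1.56 kWh per day, or roughly 570 kWh per node per year,
before cooling. A low-wattage DC board would cut that figure directly.)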

On Sat, Dec 13, 2008 at 8:50 AM, Mark Hahn <hahn at mcmaster.ca> wrote:

>> What is 1u?
>
> rack-mounted hardware is measured in units called "units" ;)
> 1U means 1 rack unit: roughly 19" wide and 1.75" high.  racks are all
> the same width, and a rackmount unit consumes some number of units in
> height.  (rack depth is moderately variable.)  (a full rack is generally
> 42U.)
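>
> (as a quick arithmetic check of those figures: 42U x 1.75" = 73.5" of
> mounting height, i.e. just under two metres.)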
>
> a 1U server is a basic cluster building block - pretty well suited,
> since it's not much taller than a disk, and fits a motherboard pretty
> nicely (clearance for dimms if designed properly, a couple optional cards,
> passive CPU heatsinks.)
>
>> What is a blade system?
>
> it is a computer design that emphasizes an enclosure and fastening
> mechanism that firmly locks buyers into a particular vendor's
> high-margin line ;)
>
> in theory, the idea is to factor a traditional server into separate
> components, such as shared power supply, unified management, and often
> some semi-integrated network/san infrastructure.  one of the main original
> selling points was power management: that a blade enclosure would have
> fewer, more fully loaded, more efficient and/or more reliable PSUs.
> blades are often claimed to have superior manageability.  both of these
> factors are very, very arguable, since it's now routine for 1U servers to
> have nearly the same PSU efficiency, for instance.  and in reality, simple
> management interfaces like IPMI are far better (scalably scriptable)
> than a too-smart gui per enclosure, especially if you have 100
> enclosures...
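>
> for example, a minimal sketch of what "scriptable" means in practice
> (hostnames and credentials below are hypothetical placeholders):
>
>   import subprocess
>
>   # query power state on 100 nodes' BMCs in one loop, via ipmitool
>   for n in range(1, 101):
>       subprocess.call(["ipmitool", "-I", "lanplus", "-H", "node%03d" % n,
>                        "-U", "admin", "-P", "secret", "power", "status"])
>
> try doing that by hand with one gui per enclosure.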
>
>> goes into a good rack in terms of size and material (assuming it has to be
>> insulated)
>
> ignoring proprietary crap, MB sizes are quite standardized.  and since 10
> million random computer shops put them together, they're incredibly
> forgiving when it comes to mounting, etc.  I'd recommend just glue-gunning
> stuff into place, and not worrying too much.
>
>> Anyone using clusters for animation on this list?
>
> not much, I think.  this list is mainly "using commodity clusters to do
> stuff fairly reminiscent of traditional scientific supercomputing".
>
> animation is, in HPC terms, embarrassingly parallel and often quite
> IO-intensive.  both of those are somewhat derogatory.  all you need for
> an animation farm is some storage, a network, nodes and probably a
> scheduler or at least a task queue-er.
>
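>
> as a minimal sketch of that last piece, a task queue-er that hands
> frames to render nodes over ssh (hostnames and the "renderer" command
> are hypothetical stand-ins; python stdlib only):
>
>   import subprocess
>   from multiprocessing.pool import ThreadPool
>
>   NODES = ["node01", "node02", "node03"]   # hypothetical hosts
>   FRAMES = range(1, 241)                   # frames 1..240 of a shot
>
>   def render(frame):
>       node = NODES[frame % len(NODES)]     # static round-robin assignment
>       # run one frame on that node; "renderer" stands in for the real
>       # render command (blender, etc.)
>       return subprocess.call(["ssh", node, "renderer", str(frame)])
>
>   # the pool keeps len(NODES) renders in flight at once
>   ThreadPool(len(NODES)).map(render, FRAMES)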



-- 
Best regards,
arjuna
http://www.brahmaforces.com