high physical density cluster design - power/heat/rf questions
szii at sziisoft.com
szii at sziisoft.com
Mon Mar 5 23:29:19 PST 2001
We were pondering exactly the same questions about a month ago, and
while our project is on hold, here's what we came up with...
Mounting: Plexiglass/plastic was our choice as well. Strong, cheap,
and can be metal-reinforced if needed.
We were going to orient the boards on their sides, stacked 2 deep. At
a pitch of about 3" per board on edge, you can fit about 5 comfortably
(6 if you try) across a stock 19" rack. They can also slide out this way.
Theoretically you can get 10-12 boards into 5-6U (not counting power
supplies or hard drives), depending on board orientation. We were looking
at ABIT VP6 boards. They're cheap, they're dual-CPU boards, and they're
FC-PGA, so they're thin. That's 20-24 CPUs in 5-6U. *drool* If AMD ever
gets around to their dual boards, those will rock as well.
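If you want to sanity-check the density math, here's a rough Python sketch
(the 3" pitch, the usable rack width, and the ATX board dimensions are my
own assumptions, so measure before you cut any plexiglass):

    # Back-of-the-envelope density check for boards mounted on edge.
    # All dimensions are assumptions for illustration; measure your own parts.
    USABLE_WIDTH_IN = 17.75   # usable interior width of a 19" rack (approx.)
    BOARD_PITCH_IN = 3.0      # per-board pitch on edge (CPU + heatsink + fan)
    BOARD_HEIGHT_IN = 9.6     # short side of an ATX board standing on edge
    U_IN = 1.75               # 1U = 1.75"
    ROWS_DEEP = 2             # stacked 2 deep
    CPUS_PER_BOARD = 2        # dual-socket VP6

    across = int(USABLE_WIDTH_IN // BOARD_PITCH_IN)   # ~5 comfortably
    boards = across * ROWS_DEEP                       # ~10-12 boards
    height_u = BOARD_HEIGHT_IN / U_IN                 # ~5.5U, call it 5-6U
    print(f"{boards} boards / {boards * CPUS_PER_BOARD} CPUs "
          f"in about {height_u:.1f}U")

Squeeze 6 across instead of 5 and you get the 12-board / 24-CPU case.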
For power supplies and HA, we were going to use "lab" power supplies
and run a diode array to keep them from fighting too much.
Instead of many smaller supplies, you can use 4-5 larger supplies and run
them into a common harness to supply power. You'll need 3.3V, 5V, and 12V
supplies, but it beats running 24 separate supplies (IMHO), and if one dies,
you don't lose the board, you just take a drop in supply capacity until you
replace it.
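A rough Python sketch of the rail sizing, if it helps (the per-board
currents are pure guesses - measure a real board - and each OR-ing diode
drops roughly 0.4-0.5V, which shows up as heat and eats into the 3.3V rail
unless the supplies can be trimmed up or remote-sensed):

    # Rough shared-rail sizing for N boards fed from a few big supplies
    # through an OR-ing diode array.  Per-board currents are assumptions.
    N_BOARDS = 12
    PER_BOARD_AMPS = {"3.3V": 4.0, "5V": 8.0, "12V": 1.0}  # guessed draws
    DERATE = 0.7              # run supplies at ~70% of rating for headroom
    DIODE_DROP_V = 0.45       # typical Schottky forward drop in the OR array

    for rail, amps in PER_BOARD_AMPS.items():
        load = N_BOARDS * amps
        rating = load / DERATE
        diode_heat = load * DIODE_DROP_V
        print(f"{rail}: {load:.0f} A load, spec ~{rating:.0f} A of supply, "
              f"~{diode_heat:.0f} W burned in diodes")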
For heat dissipation, we're in a CoLo facility. Since getting at the
individual video/network/mouse/keyboard/etc. stuff is very rare (hopefully)
once it's up, we were going to put a pair of box fans (wind tunnel style)
in front of and behind the box. =) In a CoLo, noise is not an issue.
Depending on the exact design, you might even get away with dropping the
fans off of the individual boards and letting the wind tunnel do that part,
but that's got problems if the tunnel dies and affects every processor in
the box.
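For ballpark airflow, the usual rule of thumb is CFM ~ 3.16 x watts /
delta-T (deg F). A quick Python sketch, with the per-board wattage just an
assumption:

    # Ballpark airflow needed to carry the heat out of a semi-sealed cabinet,
    # using the rule of thumb CFM ~= 3.16 * watts / delta_T_F.
    N_BOARDS = 12
    WATTS_PER_BOARD = 120     # dual-CPU board + RAM, assumed draw
    DELTA_T_F = 18            # allowed air temperature rise through the box

    total_watts = N_BOARDS * WATTS_PER_BOARD
    cfm = 3.16 * total_watts / DELTA_T_F
    print(f"{total_watts} W -> roughly {cfm:.0f} CFM through the cabinet")

A 20" box fan is rated somewhere in the low thousands of CFM in free air
and moves a lot less against back-pressure, so a push/pull pair should
still leave margin - but keeping the per-board fans as a fallback covers
the case where the tunnel dies.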
I'm not an EE guy, so the power-supply issue is being handled by someone
else. I'll field whatever questions I can, and pass on what I cannot.
If you ever wander down an aisle and see a semi-transparent blue piece
of plexiglass with a bunch of surfboards on it, you'll know what
it is - the Surfmetro "Box O' Boards."
Does anyone have a better way to do it? Always room for improvement...
-Mike
----- Original Message -----
From: Velocet <mathboy at velocet.ca>
To: <beowulf at beowulf.org>
Sent: Monday, March 05, 2001 10:13 PM
Subject: high physical density cluster design - power/heat/rf questions
> I have some questions about a cluster we're designing. We really need
> a relatively high density configuration here, in terms of floor space.
>
> To be able to do this I have found pricing on some socket A boards with
> onboard NICs and video (don't need video though). We aren't doing anything
> massively parallel right now (just running Gaussian/Jaguar/MPQC
> calculations), so we don't need major bandwidth.* We're booting with the
> root filesystem over NFS on these boards. Haven't decided on FreeBSD or
> Linux yet. (This email isn't about software config, but feel free to ask
> questions.)
>
> (* Even with NFS disk, we're looking at using MFS on FreeBSD (or possibly
> the new md system) or the new nbd on Linux, or equivalent, for Gaussian's
> scratch files - oodles faster than disk, and in our case, with no
> disk, it writes across the network only when required. Various tricks
> we can do here.)
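For the Linux NFS-root piece, the kernel's built-in nfsroot support takes
something along these lines on the boot command line (with NFS root and IP
autoconfiguration compiled in) - the server address and export path here
are made-up placeholders:

    root=/dev/nfs nfsroot=192.168.1.10:/export/nodes/node01 ip=dhcp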
>
> The boards we're using are PC Chips M810 boards (www.pcchips.com). Linux
> seems fine with the NIC on board (an SiS chip of some kind - Ben LaHaise
> of Red Hat is working with me on some of the design and has been testing
> it for Linux; I have yet to play with FreeBSD on it).
>
> The configuration we're looking at to achieve high physical density is
> something like this:
>
>                 NIC and Video connectors
>                /
>   ------------=--------------          board upside down
>   | cpu |  =          |   RAM   |
>   |-----|             |_________|
>   |hsync|
>   |     |              --fan--
>    --fan--             |     |
>                        |hsync|
>    _________           |-----|
>   |   RAM   |    =     | cpu |
>   -------------=-------------          board right side up
>
> As you can see, the boards kind of mesh together to take up less space. At
> micro ATX form factor (9.25" per side, I think), about 2.5 or 3" high for
> the CPU+heatsink+fan (tallest) and 1" or less for the RAM, I can stack two
> of these into 7" (4U). At 9.25" per side, 2 wide inside a cabinet gives me
> 4 boards per 4U in a standard 24" rack footprint. If I go 2 deep as well
> (i.e. a 2x2 config), then for every 4U I can get 16 boards in.
>
> The cost for this is amazing, some $405 CDN right now for Duron 800s with
> 128MB of RAM each, without the power supply (see below; a standard ATX
> supply is $30 CDN/machine). For $30,000 you can get a large ass-load of
> machines ;)
>
> Obviously this is pretty ambitious. I heard talk of some people doing
> something like this, with the same physical configuration and cabinet
> construction, on the list. Wondering what your experiences have been.
>
>
> Problem 1
> """""""""
> The problem is that in the diagram above, the upside-down board has
> another board .5" above it - are these two boards going to leak RF like
> mad and interfere with each other's operation? I assume there's not much
> to do there but put a layer of metal, grounded to the cabinet, in between.
> This will drive up the cabinet construction costs. I'd rather avoid this
> if possible.
>
> Our original construction was going to be copper pipe and plexiglass
> sheeting, but we're not sure that this will be viable for something that
> could be rather tall in our future revisions of our model. Then again,
> copper pipe can be
> bolted to our (cement) ceiling and floor for support.
>
> For a small model that Ben LaHaise built, check the pix at
> http://trooper.velocet.ca/~mathboy/giocomms/images
>
> It's quite a hack, try not to laugh. It does engender the 'do it damn
> cheap' mentality we're operating with here.
>
> The boards are designed to slide out the front once the power and network
> are disconnected.
>
> An alternate construction we're considering is sheet metal cutting and
> folding, but at much higher cost.
>
>
> Problem 2 - Heat Dissipation
> """"""""""""""""""""""""""""
> The other problem we're going to have is heat. We're going to need to
> build our cabinet such that it's relatively sealed, except at the front,
> so we can get some coherent airflow in between boards. I am thinking we're
> going to need to mount extra fans on the back (this is going to make the
> 2x2 design a bit more tricky, but at only 64-odd machines we can go with a
> 2x1 config instead, 2 stacks of 32, just 16U high). I don't know what you
> can suggest here; it's all going to depend on physical configuration. The
> machine is housed in a proper environment (Datavaults.com's facilities,
> where I work :) that's climate controlled, but the inside of the cabinet
> will still need massive airflow, even with the room at 68F.
>
>
> Problem 3 - Power
> """""""""""""""""
> The power density here is going to be high. I need to mount 64 power
> supplies in close proximity to the boards, another reason I might need to
> stick with the 2x1 instead of the 2x2 design. (2x1 allows easier access
> too.)
>
> We don't really want to pull that many power outlets into the room - I
> don't know what a diskless Duron 800 board with 256MB or 512MB of RAM will
> use, though I guess around 0.75 to 1 A. I'm going to need 3 or 4 full
> circuits in the room (not too bad, actually). However, that's a lot of
> weight on the cabinet to hold 60-odd power supplies, not to mention the
> weight of the cables themselves weighing down on it, and a huge mess of
> them to boot.
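Quick sanity check on the circuit math, as a Python sketch (the per-board
wattage, supply efficiency, and breaker size are all guesses - worth
metering one node at the wall before wiring anything):

    # Rough AC-side load estimate for 64 diskless nodes.
    # Per-board wattage, PSU efficiency, and circuit size are assumptions.
    N_BOARDS = 64
    WATTS_PER_BOARD = 75      # diskless Duron 800 + RAM, assumed draw
    PSU_EFFICIENCY = 0.70     # typical ATX supply of the era, assumed
    LINE_VOLTS = 120
    CIRCUIT_AMPS = 20
    DERATE = 0.8              # keep continuous load at 80% of breaker rating

    wall_watts = N_BOARDS * WATTS_PER_BOARD / PSU_EFFICIENCY
    amps = wall_watts / LINE_VOLTS
    circuits = amps / (CIRCUIT_AMPS * DERATE)
    print(f"~{wall_watts:.0f} W at the wall, ~{amps:.0f} A, "
          f"~{circuits:.1f} x {CIRCUIT_AMPS} A circuits")

which lands in the same 3-4 circuit ballpark as the 0.75-1 A per-board
guess.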
>
> I am wondering if someone has a reliable way of wiring together multiple
> boards per power supply. What's the max density per supply? Can we
> go with redundant power supplies, like N+1? We don't need that much
> reliability (jobs are short, run on one machine, and can be restarted
> elsewhere), but I am really looking for something that's going to
> reduce the cabling.
>
> As well, I am hoping there is some economy of scale in the power conversion here -
> a big supply will hopefully convert power for multiple boards more
> efficiently than a single supply per board. However, as always, the
> main concern is cost.
>
> Any help or ideas are appreciated.
>
> /kc
> --
> Ken Chase, math at velocet.ca * Velocet Communications Inc. * Toronto, CANADA
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf