[Beowulf] DC Power Dist. Yields 20%

Mark Hahn hahn at physics.mcmaster.ca
Sat Aug 12 18:22:50 PDT 2006


>> only contain 2 of 14 fans (or of 18 moving parts).  I don't know whether
>> there's a reason to think many small AC-DC PSU's would be less efficient
>> than a couple really big ones (factoring in the cost and inefficiency of
>> DC power distribution).
> Do your PDUs receive 220 VAC or ~400 VAC?

I think everything is 220 - the 65 kVA UPSes and harmonic mitigators (liebert,
100 kVA-ish PDU-like things) are all 3-phase.  then normal 30A 220 circuits,
three to a rack, there split into 8-socket power bars with ~3 nodes
in each.  lots of pointless plugs, etc.  frightening number of separate
wires, and I'm not completely sure the neutrals would meet RGB's standards
(but the PDUs claim negligible N current, perhaps because the in-node 
PSUs are all PFC?)

anyway, the power infrastructure for this ~900-node machineroom is 
not something I'm proud of, in spite of it being relatively new.
it sure seems like better building blocks would make life easier.

at the time, 30A 220 seemed like the definite winner - maybe we 
should have insisted on putting, say, 4 of them in a box on 
teck cable (flex), to optimize the ease of balancing power, phases, etc.
we went with wiremold instead, which sounded OK at the time, since 
we didn't think we'd have to be changing things as much...
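
as a toy illustration of the balancing part (the kVA loads below are
invented numbers, just to show the idea), a greedy least-loaded-phase
assignment is about all it takes:

  # toy sketch: put each new circuit on the currently least-loaded phase;
  # the circuit loads are made up for illustration
  phases = {"A": 0.0, "B": 0.0, "C": 0.0}
  circuit_loads_kva = [5.2, 4.9, 5.3, 5.1, 5.0, 5.2]   # a handful of rack circuits
  for load in circuit_loads_kva:
      lightest = min(phases, key=phases.get)
      phases[lightest] += load
  print(phases)   # per-phase totals stay within a fraction of a kVA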

> They're talking about a 380 VDC distribution grid. On its face, this
> infrastructure would be at least as efficient as a supply of 270 VAC.

so it's only at very long distances that AC has an efficiency lead?
also, if distributing 380 DC, how hefty is the infrastructure for stepping it
down to something reasonable within a rack?  I'd assume no more than ~48V
would go to a node.
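
the way I think about it is that the overall figure is just the product
of the stage efficiencies, so the comparison looks something like this
(every number below is an illustrative guess, not a vendor spec or a
measurement):

  # illustrative conversion chains; all efficiencies are placeholders
  # picked only to show how the overall figure falls out
  ac_chain = [0.92, 0.98, 0.80]   # e.g. UPS, PDU/transformer, in-node AC-DC PSU
  dc_chain = [0.94, 0.96, 0.92]   # e.g. front-end rectifier, 380V->48V stage, in-node DC-DC

  def overall(chain):
      eff = 1.0
      for stage in chain:
          eff *= stage
      return eff

  print("AC path:", round(overall(ac_chain), 3))   # ~0.72 with these guesses
  print("DC path:", round(overall(dc_chain), 3))   # ~0.83 with these guesses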

>> I'd certainly be interested in a distribution system (whether AC or DC)
>> that avoided so damn many plugs and sockets and breakers and PDUs.
>> I guess I'm more enthused about servers becoming lower-powered, and also
>> quite interested in better ways to dissipate the heat than raised floors
>> and traditional chillers...
>
> Heh heh. Water cooled racks?

somehow, fluid cooling does seem attractive.  if it can be done without
introducing ~2 fittings per node, I can imagine it being a huge win.
for instance, I'm personally not attached to the traditional mechanism
of sheetmetal-enclosed boxes with rails on the sides.  oh, they work,
and are somewhat COTS/compatible/etc.  but we bought our racks from the 
same vendor who made the nodes, and that's not uncommon.

suppose instead the motherboard was mounted on not much more than a tray,
and it slid onto a heatsink that was rigidly attached to the rack, and 
hard-piped with some heat transfer agent.  taking a node out would 
involve unplugging the usual cables in the back (power, ipmi, eth, quadrics
in our case) and sliding the node off its "cold plate".  doing away with 
the 14 nasty little fans per node sounds extremely attractive...
(doing away with the rails wouldn't be so bad either - it's been many years
since I saw anyone service a node by pulling it part-way out, so the rails
have really become an inappropriate design...)
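
the plumbing side doesn't look scary, at least on paper: assuming ~300 W
per node and a 10 C coolant rise across the plate (both numbers invented
purely for illustration), the flow per cold plate is tiny:

  # rough coolant-flow estimate for one cold plate; the heat load and
  # temperature rise are assumptions picked just to show the arithmetic
  q_watts = 300.0     # assumed heat load per node
  cp_water = 4186.0   # specific heat of water, J/(kg*K)
  delta_t = 10.0      # assumed coolant temperature rise across the plate, K

  kg_per_s = q_watts / (cp_water * delta_t)
  litres_per_min = kg_per_s * 60.0          # ~1 kg per litre for water
  print(round(litres_per_min, 2), "L/min per node")   # about 0.4 L/min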

if there was a way to make a robust, flexible heat-pipe-like thing,
that would be great...

> Well, a DC-DC power converter is pretty straight forward. I expect part
> of the trouble with switching power supplies is those high frequency
> mosfets, which you wouldn't have (I expect) with a DC-DC converter.

hmm, I thought the basic designs were similar (both switching), since dc-dc
doesn't have a transformer like in the old days.  the switching might 
certainly run at lower frequency...
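
if both really are buck-style switchers (which is only my guess), the
ideal relation is just Vout = D * Vin, so the step ratios aren't exotic
(a trivial sketch, voltages picked arbitrarily):

  # ideal buck-converter relation, losses ignored; voltages are just examples
  def duty_cycle(v_in, v_out):
      return v_out / v_in

  print(duty_cycle(48.0, 12.0))    # 0.25 for a 48V -> 12V stage
  print(duty_cycle(380.0, 48.0))   # ~0.126 for a hypothetical 380V -> 48V stage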

regards, mark hahn.


