[Beowulf] Clusters just got more important - AMD's roadmap
Lux, Jim (337C)
james.p.lux at jpl.nasa.gov
Wed Feb 8 06:18:50 PST 2012
On 2/8/12 5:34 AM, "Eugen Leitl" <eugen at leitl.org> wrote:
>On Wed, Feb 08, 2012 at 02:13:49PM +0100, Peter Kjellström wrote:
>
>> * Memory bandwidth to all those FPUs
>
>Memory stacking via TSV is coming. APUs with their very apparent
>memory bottlenecks will accelerate it.
>
>> * Power (CPUs in servers today max out around 120W with GPUs at >250W)
>
>I don't see why you can't integrate APU+memory+heatsink in a
>watercooled module that is plugged into the backplane which
>contains the switched signalling fabric.
I don't know about that. I don't see the semiconductor companies making
such an integrated widget, so it would basically be some sort of integrator
doing it, like a mobo manufacturer. But I don't think the volume is
there for the traditional mobo types to find it interesting.
So now you're talking about small-volume specialized manufacturers, like the
ones who sell into the conduction-cooled MIL/AERO market. And those are
*expensive*, and not just because of the plethora of requirements and
documentation that customers in that market want; mostly it's about
manufacturing volume.
The whole idea of "plugging in" a liquid-cooled module to a backplane is
also sort of unusual. A connector that can carry high-speed digital
signals, power, AND liquid without leaking would be weird. And even if
it's not "one connector", logically that whole mating surface of the
module is a connector. Reliable liquid connectors usually need some sort
of latching or positive action: a collar that snaps into place (think air
hose), or turns, or otherwise puts a clamping force on an O-ring or
other gasket.
It can be done (and probably has been), but it's going to be "exotic" and
expensive.
>
>> Either way we're in for an interesting future (as usual) :-)
>
>I don't see how x86 would make it to exascale. It's too
>bad MRAM/FeRAM/whatever isn't ready for SoC yet.
Even if you put the memory on the chip, you still have the interconnect
scaling problem: light speed and distance, if nothing else. Putting
everything on a chip just shrinks the problem; it's much like 15 years
ago with PC tower cases on shelving and Ethernet interconnects.
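To put rough numbers on the light-speed point, here's a minimal
back-of-envelope sketch in Python; the cable lengths, propagation factor,
and clock rate are illustrative assumptions, not figures from this thread:

# Back-of-envelope: signal time of flight across a machine vs. a clock period.
# The distances, propagation factor, and clock rate below are illustrative
# assumptions, not numbers from this thread.

C = 3.0e8             # speed of light in vacuum, m/s
PROP_FACTOR = 0.7     # rough signal velocity in copper/fiber, as a fraction of c

def round_trip_cycles(distance_m, clock_hz, prop_factor=PROP_FACTOR):
    """Clock cycles consumed by one signal round trip over distance_m."""
    one_way_s = distance_m / (C * prop_factor)
    return 2.0 * one_way_s * clock_hz

print(round_trip_cycles(30.0, 3.0e9))  # ~857 cycles for a 30 m run across a room
print(round_trip_cycles(0.3, 3.0e9))   # ~8.6 cycles for a 30 cm board

Shrinking the machine buys a constant factor in latency, but the
cycles-per-trip number never goes to zero, which is the sense in which
the problem only shrinks rather than disappears.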
> Also, Moore's law should end by around 2020 or earlier, and architecture only
>pushes you one or two generations further at most. I don't see
>how 3D integration would be ready by then, and 2.5D only
>buys you another one or two doublings at best. (TSV stacking
>is obviously off-Moore.)
>
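To make the doubling arithmetic above concrete, a minimal sketch; the
two-year cadence, the 2012 baseline, and the two extra doublings credited
to architecture/2.5D are assumptions for illustration, not claims from
the thread:

# Plain Moore-style density scaling from 2012 to 2020, plus a couple of
# extra doublings from architecture/2.5D packaging. The cadence and the
# endpoint year are assumptions for illustration only.

def density_multiplier(start_year, end_year, years_per_doubling=2.0):
    """Transistor-density multiplier from doubling at a fixed cadence."""
    doublings = (end_year - start_year) / years_per_doubling
    return 2.0 ** doublings

base = density_multiplier(2012, 2020)  # 4 doublings -> ~16x
with_extras = base * 2.0 ** 2          # plus ~2 more doublings -> ~64x
print(base, with_extras)               # 16.0 64.0

On those assumptions, the extra doublings are a real but modest extension
once the underlying scaling stops.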