[Beowulf] Servers Too Hot? Intel Recommends a Luxurious Oil Bath

Vincent Diepeveen diep at xs4all.nl
Wed Sep 5 05:21:03 PDT 2012


On Sep 5, 2012, at 12:22 PM, Eugen Leitl wrote:

> On Tue, Sep 04, 2012 at 02:54:46PM -0400, Ellis H. Wilson III wrote:
>
>> I know we've been taking things to the uber-scale level with this
>> conversation, but does anyone have suggestions for small (homebrew
>> Beowulf) clusters?  I've considered oil before, but for all the
>
> A major advantage of these forthcoming ARM server systems is that
> they are air-coolable, and in fact even convection-aircoolable,
> if you add a suitable funnel on top of the rack.

For cheap mobile solutions ARM is great.

ARM is 'hopeless' in IPC (instructions per clock), though.

Even if some sort of decent design shows up, you then have to wait ten years until the next great ARM chip is there.

Furthermore, the compiler used at most HPC sites is either Intel C++ or GCC.

The fast applications on ARM are simply written in assembler, and there is a reason for that.

Honestly, if you're not writing assembler, there is nothing for you on ARM, if you ask me.

By using GCC on ARM, which is what is going to happen in practice, you already lose factors compared to x64.

At that point ARM drops out of the IPC major league entirely; it's already a weak architecture IPC-wise compared to x64.

For HPC these single-chip ARM parts might be a disaster once again. We had those Origin 3800 machines from SGI some years ago with MIPS-type chips (R14k and such); are we going to see that disaster once again?

There just wasn't a follow-up.

Now I'm not an expert in memory bandwidth, as that's a very specialized field, yet with the upcoming manycore solutions it's going to be pretty important.

Both GPU manufacturers might stand a chance there as well: Nvidia with Tesla, and maybe AMD if they replace their gpgpu staff. Right now, doing business with AMD for gpgpu seems rather hopeless because of their totally inadequate gpgpu support, which is five years behind any decent form of planning for those who want to build gpgpu systems.

Intel will probably build something, yet it won't outperform Nvidia hardware-wise. Their Knights*-type solutions have had too much delay to still be taken seriously. Yet their support is usually very good, and they know how to convince clients in backrooms.

IBM already has something that sits between a manycore (efficient, lots of slow threads) and a CPU; I'm referring to the latest BlueGene chip.

Others might get there as well. Fujitsu has always had something for Japan, and just reading the paper specs their CPU is very capable: light-years ahead of any ARM, if I may say so.

I never understood why Fujitsu never offers internationally, at a cheap price, what they have.

These manycores will need massive bandwidth. Will ARM keep up with that?

There seem to be three needs in HPC, as far as one can split it into groups (there will be overlaps):

1) A very small group just needs massive RAM. More RAM is better, and a few cores is, so to speak, enough.

2) A rather large group runs heavy, branchy integer codes which simply do not vectorize, and usually there also isn't the time to fully optimize those codes. This is a mix of different sciences, finance, and the military.

3) The largest group is doing vectorized computing, mainly matrix-type calculations.

ARM stands 0% chance in groups 1 and 2.

For group 3, ARM needs massive bandwidth to RAM and to the other nodes for the matrix calculations.

There might be possibilities in group 3, but I wouldn't call such a design 'ARM'. It would be a totally new design, totally different, with nothing in common with ARM.

Even if some competitive ARM machine can get built, Intel/Nvidia/AMD and IBM will for sure win it back from ARM a year later.

Porting your application twice, first to ARM plus manycore hardware and then to yet another platform: is that a wise thing for an organisation to consider when buying?



>
> With SSDs, there are no movable parts. Float power will suck,
> but there are GPGPU options there, and memory bandwidth can
> be quite nice with memory cube like die stacking.
>
> Arguably you can run this off battery-buffered 12 or 24 V DC.
>
>> capillary concerns voiced in this list have avoided it.  I would
>> consider a reasonable gas (NOT hydrogen) if one could be suggested  
>> along
>> with a feasible way to keep that gas in a small rack or similar
>> structure, or an alternative to oil if a nicer one (albeit not as
>> efficient) could similarly be suggested.  Perhaps air or piped
>> water-cooling is indeed my best bet.
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin  
> Computing
> To change your subscription (digest mode or unsubscribe) visit  
> http://www.beowulf.org/mailman/listinfo/beowulf


