[Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR
diep at xs4all.nl
Thu Dec 22 08:30:15 PST 2011
On Dec 22, 2011, at 4:42 PM, Prentice Bisbal wrote:
> On 12/22/2011 09:57 AM, Eugen Leitl wrote:
>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote:
>>> Or if your German is rusty:
>> Wonder what kind of response will be forthcoming from nVidia,
>> given developments like http://www.theregister.co.uk/2011/11/14/
>> It does seem that x86 is dead, despite good Bulldozer performance
>> in Interlagos
>> (engage dekrautizer of your choice).
> At SC11, it was clear that everyone was looking for ways around the
> power wall. I saw 5 or 6 different booths touting the use of FPGAs for
> improved performance/efficiency.
If you have one specific problem other than massive multiplication,
then FPGAs can be fast: they can parallelize a number of sequential
operations. However, the majority on this list is busy with HPC, and
most HPC codes hammer the multiplication unit big time.
You're not going to beat optimized GPUs with an FPGA card when all
you need is some multiplications at a low number of bits.
Sure, some hidden NSA team might have cooked up a math processor that
kicks butt and can handle big numbers. But what's the price of
developing that team?
Can you afford such a team?
In that case an FPGA isn't going to beat, price-wise, a node
combining good processor cores with a good GPU in the PCI-E 3.0 slot
and a network card.
What's the price of such a node?
Your guess is as good as mine, but it's always going to be cheaper
than an FPGA card; so far, history has told us those sell at a real
premium once they can do something useful.
Furthermore, the CPU-and-GPU node can run other codes as well, and
such nodes are cheap to scale up in a cluster.
That eats more power, sure, but we all must face that performance
brings more power usage with it.
At home this might be difficult to solve, but factories get power 20x
cheaper, especially nuclear power.
Now this is not a good forum to start an energy debate (again);
having sat on an energy commission, I can say you might be confronted
with numbers a tad different from what you find on Google. Yet
regrettably it's a fact that the average person on this planet eats
more and more power each year.
As for HPC, not too many on this planet are busy with HPC, so you
have to ask yourself: if a simple plastics factory making a few
plastic boards, plastic knives, plastic forks, and plastic spoons, if
a tiny company doing that already eats 7.5 megawatts (that's actually
a factory around the corner here), is it realistic to expect HPC to
eat less?
7.5 megawatts, depending on where you buy the power, runs around
0.4 cents per kilowatt-hour.
At prices like that, drawing 7.5 megawatts continuously, the energy
costs 0.004 * 7.5 * 1000 = 30 euro an hour.
Per year that is: 365 * 24 * 30 = 262,800 euro.
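As a sanity check of that arithmetic (assuming, as above, a flat tariff of 0.4 euro cents per kWh and a continuous 7.5 MW draw):

```python
# Sanity check of the energy-cost figures above.
# Assumptions: flat tariff of 0.4 euro cents/kWh, continuous 7.5 MW draw.
price_per_kwh = 0.004       # euro per kilowatt-hour (0.4 cents)
power_kw = 7.5 * 1000       # 7.5 megawatts expressed in kilowatts

cost_per_hour = price_per_kwh * power_kw   # 30 euro/hour
cost_per_year = cost_per_hour * 365 * 24   # 262,800 euro/year

print(cost_per_hour)  # 30.0
print(cost_per_year)  # 262800.0
```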
Now, what eats 7.5 megawatts if we speak about a cluster? Let's
assume a two-socket Intel Xeon Sandy Bridge 8-core node with an FDR
network and a GPU, drawing 1000 watts per node.
That's 7500 nodes.
What will such a node cost? Say 6000 euro?
So a machine that costs 7500 * 6000 = 45 million euro has an energy
bill of 262,800 euro a year.
What are we talking about?
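The cluster sizing above can be sketched the same way (taking the text's assumptions of 1000 W and 6000 euro per node, and the 262,800 euro/year energy figure):

```python
# Rough cluster sizing under the assumptions in the text:
# 7.5 MW power budget, 1000 W per node, 6000 euro per node.
power_budget_w = 7.5e6
node_power_w = 1000.0
node_price_eur = 6000.0

nodes = power_budget_w / node_power_w        # 7500 nodes
hardware_cost = nodes * node_price_eur       # 45,000,000 euro
energy_cost_per_year = 262800.0              # from the tariff calculation

# The hardware outlay dwarfs a year of power: ~171x.
print(int(nodes))
print(hardware_cost / energy_cost_per_year)
```

In other words, at industrial tariffs the capital cost of such a machine is two orders of magnitude larger than its annual power bill, which is the point being made.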
> I don't remember there being a single
> FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE,
> Intem MIC, or something else, I think it's clear that the future
> of HPC
> architecture is going to change radically in the next couple years,
> unless some major breakthrough occurs for commodity processors.
> I think DE Shaw Research's Anton computer, which uses FPGAs and custom
> processors, is an excellent example of what the future of HPC might
> look like.