[Beowulf] First 96-Node Transmeta Desktop Cluster Ships

Vincent Diepeveen diep at xs4all.nl
Wed May 4 10:08:34 PDT 2005


Yes it is very interesting.

However, I found the $100k sales price they initially quoted a bit
expensive.

I don't want to be rude, but these processors are quite slow if you just
look at a single processor, and they drop in performance after a while once
they get hot.

So a fully loaded system will have far lower performance in the long run
than in the short run.

Of course that has its charm too: a system that's fast if you only use it
briefly at a "critical" moment.

So the quoted performance is still impressive then.

The network that connects each node to the others is not so impressive.
That is not relevant for embarrassingly parallel software, though.

What is relevant is the price per gflop, IMHO.

For a NEW product that has to fight its way into the market, you simply
must be several factors cheaper than the competition. $100k I find a tad
expensive.

So to speak, for that price you can buy 10 quad-socket boxes of dual-core
1.8 GHz Opterons.

That's 80 cores x 3.6 gflops = 288 gflops, so roughly 300 gflops as well.

So effectively the 10 quad-socket Opteron boxes are at least as fast in
floating point as this Orion machine, and likely faster once the Orion
throttles.

It's true, however, that the 10 boxes run up a bigger power bill.

But if you can afford to pay $100k, that won't be a problem either.

And that said, I do not feel the Opteron is a good floating-point
processor; I would instead say it sucks at floating point. So if it still
outperforms a floating-point solution in dollars per gflop, then there is
something wrong with the system in question.

So what is really interesting is the price it actually sells for now. If it
sells for $10k at 150+ gflops, then that is a good buy IMHO.
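
To make the dollar-per-gflop comparison concrete, here is a small
back-of-the-envelope sketch in Python. The prices are the ones quoted in
this thread, the Opteron figure assumes 2 flops/cycle per core at 1.8 GHz,
the 300 gflops for the Orion is its advertised peak, and the $10k/150 gflop
entry is the hypothetical price point above; none of these are measured
numbers.

    # Back-of-the-envelope price-per-gflop comparison.
    # All figures are quoted prices and theoretical peaks, not measurements.

    def dollars_per_gflop(price_usd, gflops):
        return price_usd / gflops

    # 10 boxes x 4 sockets x 2 cores = 80 Opteron cores;
    # assume 2 flops/cycle at 1.8 GHz -> 3.6 gflops per core.
    opteron_gflops = 10 * 4 * 2 * 1.8 * 2   # 288 gflops peak
    orion_gflops = 300                      # advertised peak

    systems = [
        ("Orion at the quoted $100k",        100000, orion_gflops),
        ("10x quad dual-core Opteron boxes", 100000, opteron_gflops),
        ("hypothetical $10k / 150 gflops",    10000, 150.0),
    ]

    for name, price, gflops in systems:
        print("%-35s %6.1f $/gflop" % (name, dollars_per_gflop(price, gflops)))

That prints roughly 333 $/gflop for the Orion at $100k, 347 $/gflop for the
Opteron alternative, and 67 $/gflop for the hypothetical $10k price point.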

Nevertheless, considering that this company has only just started making
clustered systems and has already brought such an impressive product to
market so quickly, they certainly have my compliments.

IMHO they are the right type of company to build a clustered Cell-type
processor system (if IBM is willing to sell them CPUs). Their system needs a
fast processor that uses little power. That philosophy I really like.

Please note it would perform horribly for my own software, which is
latency-sensitive, so anything I write here is meant for software other
than my search software.

At 07:06 AM 5/4/2005 -0700, Jim Lux wrote:
>It's an interesting concept. I spoke with the folks at Orion last year, and
>they've identified that "zero infrastructure hassle" aspect as a key point.
>Has to plug into a single wall socket, for instance. The other thing is that
>they're pushing it as a minimal-administration widget, which may or may not
>come off. That is, there's no expectation that the end user/owner will be
>rolling their own kernel mods, swapping processors or disks, etc.
>
>Maybe the conceptual model is to compare it to what desktop PCs, or maybe a
>Sun, were in the 80s, relative to a VAX or mainframe down the hall.  With
>the former, you decide when to turn it on or off, you decide what runs on it
>and when.  With the latter, you compete for resources with all the other
>users sharing the investment.
>
>The question will be whether enough useful application software is available
>in a "orion compatible" form so that the casual user doesn't get sucked into
>an admin morass. I would think that if Orion and the vendors of products
>like HFSS or ADS or NASTRAN (all big computationally intensive FEM style
>codes) get together to provide a "turnkey" installation with significantly
>higher performance, it will fly.
>
>If it can make it possible to change the modeling usage paradigm from
>"batch" to "interactive" then it will have real value.  Rather than think in
>terms of "build model, submit job, do something else while waiting for
>results to come back" if you can think in terms of "Build model, wait 30
>seconds, look at results, change parameter, wait 30 seconds, look at
>results", you'll have a different style of use.
>
>I noticed that when computers got fast enough to do Numerical
>Electromagnetics Code (NEC) models in seconds, as opposed to minutes, my
>design style changed.  Instead of spending a few hours writing scripts to
>fire off a whole systematic batch of runs to do a parametric study
>(typically overnight) and then look at the plots the next morning, I'd
>manually optimize the design by iterating the parameters.  In these sorts of
>things, the "goal function" is sort of ill defined: I want a reasonably good
>impedance match, and no huge side or back lobes, where "reasonably good" and
>"huge" are sort of fuzzy concepts.
>
>Jim Lux
>
>----- Original Message -----
>From: "Eugen Leitl" <eugen at leitl.org>
>To: <Beowulf at beowulf.org>
>Sent: Wednesday, May 04, 2005 3:29 AM
>Subject: [Beowulf] First 96-Node Desktop Cluster Ships
>
>


