[Beowulf] itanium vs. x86-64
lindahl at pbm.com
Thu Feb 12 15:01:43 PST 2009
On Wed, Feb 11, 2009 at 07:52:04AM +1100, Michael Brown wrote:
> I've got a zx2000 (1.5 GHz/6 MB Madison processor, 2 GB PC2100 RAM,
> general system details at http://www.openpa.net/systems/hp_zx2000.html)
> that I use for testing and benchmarking. Obviously there's some difference
> in performance characteristics between this machine and a
> gazillion-processor Altix, but it's usually not too far off. If there's
> any code you want tested feel free to email me (replace spambox with
> michael if you think your email will upset SpamAssassin). It's running
> Debian with ICC 10.1 20080801. It's also got GCC 4.1.2, but IME using GCC
> instead of ICC on IA64 results in somewhat reduced performance, to say
> the least.
That's a great setup for an apples to oranges to kumquats comparison.
Mersenne Twister was a nice poster child for the pathopt compiler-flag
finder in the PathScale compiler suite: we found some flag combinations
that got a big speedup. I don't think it was the same implementation
that you're talking about, though.
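For anyone who wants to reproduce that kind of flag hunt, the kernel being
benchmarked is tiny. Here's a sketch of the standard MT19937 generation loop
(the common reference implementation; not necessarily the variant either of
us actually tested) -- compile it with different optimization flags and time
a long run of genrand_int32() calls:

```c
#include <stdint.h>

#define N 624
#define M 397
#define MATRIX_A   0x9908b0dfUL  /* constant vector a */
#define UPPER_MASK 0x80000000UL  /* most significant w-r bits */
#define LOWER_MASK 0x7fffffffUL  /* least significant r bits */

static uint32_t mt[N];     /* state vector */
static int mti = N + 1;    /* mti == N+1 means mt[] is uninitialized */

/* Seed the state array from a single 32-bit value. */
void init_genrand(uint32_t s)
{
    mt[0] = s;
    for (mti = 1; mti < N; mti++)
        mt[mti] = 1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti;
}

/* Generate the next 32-bit pseudorandom number. */
uint32_t genrand_int32(void)
{
    uint32_t y;
    static const uint32_t mag01[2] = { 0x0UL, MATRIX_A };

    if (mti >= N) {  /* regenerate all N words at once */
        int kk;
        if (mti == N + 1)
            init_genrand(5489UL);  /* default seed */
        for (kk = 0; kk < N - M; kk++) {
            y = (mt[kk] & UPPER_MASK) | (mt[kk+1] & LOWER_MASK);
            mt[kk] = mt[kk+M] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        for (; kk < N - 1; kk++) {
            y = (mt[kk] & UPPER_MASK) | (mt[kk+1] & LOWER_MASK);
            mt[kk] = mt[kk+(M-N)] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        y = (mt[N-1] & UPPER_MASK) | (mt[0] & LOWER_MASK);
        mt[N-1] = mt[M-1] ^ (y >> 1) ^ mag01[y & 0x1UL];
        mti = 0;
    }

    y = mt[mti++];
    /* Tempering */
    y ^= (y >> 11);
    y ^= (y << 7)  & 0x9d2c5680UL;
    y ^= (y << 15) & 0xefc60000UL;
    y ^= (y >> 18);
    return y;
}
```

The batch-regeneration loop is what makes it interesting to a flag tuner:
unrolling, if-conversion, and software pipelining decisions all show up
directly in the throughput, especially on IA64.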
> Of course, clock for clock doesn't help all that much when the top-end
> Core 2 is running about twice as fast as the top-end Itanium, and is much
... which makes me wonder why you were thinking about per-clock
performance at all. It's pretty meaningless.
> The main thing I've seen going for the Itanium in HPC is SGI's NUMALink.
> A colleague of mine is developing some quantum mechanics simulation
> stuff, and scaling on the ANU Altix is great. Scaling on a Woodcrest Xeon
> cluster using Infiniband ... poor to the point of almost not worth going
> outside a single node.
One hopes that he tried that other InfiniBand HCA, you know, the one
that at SDR rates was able to beat Altixes on a bunch of codes. Tom
Elkin can probably hook you up with my old whitepapers on the topic.
NUMALink is pretty good, but it's not unmatched, unless you are stuck
programming in a shared-memory paradigm.