[Beowulf] AMD 6100 vs Intel 5600
Hearns, John
john.hearns at mclaren.com
Thu Apr 1 03:27:36 PDT 2010
> > Various vendors try various strategies to differentiate products based
> > on features. For the most part HPC types care about performance per $,
> > performance per watt, and reliability. I'd be pretty surprised to see a
> > large HPC cluster built out of Nehalem-EX chips.
Look at yesterday's announcement of the SGI UV 10 - 4x Nehalem-EX and 512 Gbytes of memory in a
4U box. There will be similar-spec boxes from other vendors. I can see this being a very attractive
workgroup solution.
There's a very good recent Linux Magazine article by Doug Eadline in which he discusses the future direction of clusters (*)
Many workgroups have codes which scale to these 32- and 48-core sizes, so why have a humongous cluster with expensive interconnects
when you can run a 32-way job on an SMP machine with a decent amount of RAM?
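To make "a 32-way job on an SMP machine" concrete, here is a minimal sketch - just an illustrative OpenMP reduction, assuming gcc, not any particular application:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const long n = 100000000;
        double sum = 0.0;
        long i;

        /* All threads share the box's memory; OMP_NUM_THREADS=32
           gives the 32-way case with no interconnect involved. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += 1.0 / (1.0 + (double)i);

        printf("threads=%d  sum=%f\n", omp_get_max_threads(), sum);
        return 0;
    }

Build with gcc -fopenmp -O2 and run with OMP_NUM_THREADS=32 ./a.out; no MPI stack, no fabric, just one fat node.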
So my present dream system: a rack of ten Ultraviolets, connected by 10gig Ethernet to a Blade systems rack-top switch.
In a 42U rack that leaves me 1U for a batch master/login/PXE boot node.
Connect it across to a rack of Panasas shelves, similarly with a 10gig rack-top switch, and you have a pretty powerful system:
set your scheduler up to farm out jobs to each of these fat SMP nodes, and if you do have a call for a bigger core count you can run
as a cluster over the 10gig links.
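For that occasional bigger core count, the same idea in MPI form will span several of the boxes over the 10gig links. A minimal sketch (Open MPI assumed; the hostfile name and rank count below are only examples):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes one term; the reduction only crosses
           the 10gig Ethernet when ranks sit on more than one node. */
        local = 1.0 / (rank + 1.0);
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d  sum=%f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Compile with mpicc and run with something like mpirun -np 64 --hostfile uv_nodes ./a.out to spread 64 ranks across two of the 32-core boxes; in practice the scheduler would generate the hostfile for you.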
(*) http://www.linux-mag.com/id/7731
"These numbers are confirmed by a poll from ClusterMoney.net where 55% of those surveyed used 32 or less cores for their applications. When the clouds start forming around 48-core servers using the imminent Magny Cours processor from AMD many applications may fit on one server and thus eliminate the variability of server-to-server communication."