[Beowulf] Infiniband and multi-cpu configuration
Gilad Shainer
Shainer at mellanox.com
Fri Feb 8 09:21:26 PST 2008
Hi Daniel,
>
> We'll move our GigE infrastructure to an InfiniBand 4X DDR
> one (prices have dropped quite a bit). We'll also build on
> AMD Opteron, with up to 4 or 8 cores per node.
>
> In case of 8 cores:
>
> A 4-socket dual-core solution *must* scale better than
> a 2-socket quad-core one, at least in terms of memory
> bandwidth (nearly double).
> On the other hand, the HyperTransport links on the Opteron
> 2000/8000 series are theoretically rated at 8 GB/s per link,
> so that would be roughly equal to 4X SDR InfiniBand...
>
> A configuration like:
>
> 2 PCs, each with 2 sockets of dual-core Opterons,
> linked together with InfiniBand 4X DDR (8 cores total)
>
> should perform like:
>
> 1 PC with 4 sockets of dual-core Opterons,
>
> saving the cost of the InfiniBand hardware.
>
As always, it depends on the code. I have seen cases where it was better
to have more servers with fewer CPUs per server, and cases where the
opposite was true.
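For what it's worth, here is a rough back-of-the-envelope comparison of
the raw link rates mentioned above. This is only a sketch: it assumes
8b/10b encoding on 4X InfiniBand links (2.5 Gb/s per lane for SDR,
5 Gb/s for DDR) and simply takes the 8 GB/s HyperTransport figure as
quoted, so the real application-level numbers will differ.

# Back-of-the-envelope link bandwidth comparison (sketch, not a benchmark).
# Assumes 4X InfiniBand lanes signal at 2.5 Gb/s (SDR) or 5 Gb/s (DDR)
# with 8b/10b encoding, so 80% of the signaling rate carries data.
# The HyperTransport figure is the 8 GB/s aggregate number quoted above.

GBIT_PER_GBYTE = 8

def ib_4x_data_rate_GBps(lane_signal_gbps):
    """Unidirectional data rate of a 4X InfiniBand link in GB/s."""
    lanes = 4
    encoding_efficiency = 0.8  # 8b/10b encoding overhead
    return lanes * lane_signal_gbps * encoding_efficiency / GBIT_PER_GBYTE

sdr_4x = ib_4x_data_rate_GBps(2.5)   # ~1 GB/s per direction
ddr_4x = ib_4x_data_rate_GBps(5.0)   # ~2 GB/s per direction
ht_aggregate = 8.0                   # GB/s per link, figure quoted above

print("4X SDR InfiniBand: %.1f GB/s per direction" % sdr_4x)
print("4X DDR InfiniBand: %.1f GB/s per direction" % ddr_4x)
print("HyperTransport (as quoted): %.1f GB/s aggregate per link" % ht_aggregate)

So even 4X DDR gives you only a fraction of the quoted intra-box
HyperTransport bandwidth per link, which is part of why the answer
depends so much on how communication-bound the code is.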
> When maximizing cores per node, reducing network
> connections and network protocol overhead, and considering
> the Opteron memory architecture...
> is 8 (4 sockets * 2 cores) an adequate number, or is
> 4 (2 sockets * 2 cores) better?
>
> Also, InfiniBand HCAs with onboard memory must perform better than
> memory-less ones... but by how much? Any real numbers out there?
>
No, the mem-free HCAs provide the same, and in some cases better,
performance than the onboard-memory HCAs. Moreover, the mem-free HCA
architecture is more advanced and provides extra goodies. There is a
white paper on the Mellanox web site that covers the mem-free
architecture and a performance comparison between mem-free and
onboard-memory HCAs. If you cannot find it, let me know and I will send
you a link.
Gilad.