[Beowulf] Q: IB message rate & large core counts (per node)?
lindahl at pbm.com
Fri Feb 19 13:57:30 PST 2010
On Fri, Feb 19, 2010 at 01:25:07PM -0500, Brian Dobbins wrote:
> I know Qlogic has made a big deal about the InfiniPath adapter's extremely
> good message rate in the past... is this still an important issue?
Yes, for many codes. If I recall stuff I published a while ago, WRF
sent a surprising number of short messages. But really, the right
approach for you is to do some benchmarking. Arguing about
microbenchmarks is pointless; they only give you clues that help
explain your real application results. I believe that both QLogic and
Mellanox have test clusters you can borrow.
Tom Elken ought to have some WRF data he can share with you, showing
message sizes as a function of cluster size for one of the usual WRF
benchmarks.
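To make concrete what a "message rate" microbenchmark actually measures, here is a minimal sketch. It uses a localhost TCP ping-pong purely as a stand-in; real InfiniBand message-rate tests (e.g. the OSU benchmarks) use MPI or verbs directly, and the absolute numbers here mean nothing for IB hardware. The function name and parameters are made up for illustration.

```python
import socket
import threading
import time

def _recv_exact(sock, n):
    """Receive exactly n bytes (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def pingpong_rate(n_msgs=2000, msg_size=8):
    """Round-trip rate for short messages over a localhost socket.

    Illustrative only: shows the shape of a message-rate test
    (many small messages, timed as a batch), not IB performance.
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo():
        conn, _ = srv.accept()
        with conn:
            for _ in range(n_msgs):
                conn.sendall(_recv_exact(conn, msg_size))

    t = threading.Thread(target=echo)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(n_msgs):
        cli.sendall(payload)
        _recv_exact(cli, msg_size)
    elapsed = time.perf_counter() - start
    t.join()
    cli.close()
    srv.close()
    return n_msgs / elapsed  # round trips per second
```

The point of such a test is the many-small-messages pattern: a code like WRF that sends lots of short messages is sensitive to this rate, not to peak bandwidth.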
> On a similar note, does a dual-port card provide an increase in on-card
> processing, or 'just' another link? (The increased bandwidth is certainly
> nice, even in a flat switched network, I'm sure!)
Published microbenchmarks for Mellanox parts in the SDR/DDR generation
showed that only large messages got a benefit. I've never seen any
application benchmarks comparing 1- and 2-port cards.
(formerly the system architect of InfiniPath's SDR and DDR generations)