[Beowulf] Q: IB message rate & large core counts (per node)?
Brian Dobbins
bdobbins at gmail.com
Fri Feb 19 10:25:07 PST 2010
Hi guys,
I'm beginning to look into configurations for a new cluster, and with the
AMD 12-core and Intel 8-core chips here (or coming soon), I'm curious
whether anyone has data on how the message rate of the IB adapters holds up
at these core counts. With a 4-socket node packing 32 to 48 cores, a lot of
computing can get done fast, possibly stressing the network.
I know QLogic has made a big deal in the past about the InfiniPath
adapter's extremely good message rate... is this still an important issue?
How do the latest Mellanox adapters compare? (QLogic documents a rate of
roughly 30M messages processed per second on its QLE7342, but I didn't see
a comparable number for the Mellanox ConnectX-2... and, more to the point,
do people actually see this affecting them?)
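For reference, below is a rough sketch of the kind of message-rate
microbenchmark I have in mind (similar in spirit to the OSU osu_mbw_mr
test): pairs of ranks stream small nonblocking sends in windows and the
aggregate messages per second gets summed up. The message size, window,
and iteration count are arbitrary guesses on my part, not tuned values.

/* Rough message-rate sketch (in the spirit of osu_mbw_mr).
 * Even ranks stream WINDOW small nonblocking sends to an odd
 * partner, then wait for a one-byte ack; messages/sec is summed
 * over the sending ranks.  MSG_SIZE/WINDOW/ITERS are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE 8       /* tiny payload: stresses rate, not bandwidth */
#define WINDOW   64      /* messages in flight per iteration */
#define ITERS    10000

int main(int argc, char **argv)
{
    int rank, size;
    char sbuf[MSG_SIZE], rbuf[MSG_SIZE], ack = 0;
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size % 2) {
        if (!rank) fprintf(stderr, "needs an even number of ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    memset(sbuf, 0, MSG_SIZE);

    int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank % 2 == 0) {             /* sender side of the pair */
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf, MSG_SIZE, MPI_CHAR, partner, 0,
                          MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Recv(&ack, 1, MPI_CHAR, partner, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {                         /* receiver side */
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf, MSG_SIZE, MPI_CHAR, partner, 0,
                          MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, partner, 1, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - t0;
    double rate = (rank % 2 == 0) ? (double)ITERS * WINDOW / elapsed : 0.0;
    double total;
    MPI_Reduce(&rate, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("aggregate: %.2f Mmsgs/sec over %d sending ranks\n",
               total / 1e6, size / 2);
    MPI_Finalize();
    return 0;
}

Run with all the cores of one node paired against a second node and the
per-adapter rate should become apparent.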
On a similar note, does a dual-port card add message-processing capacity
on the card itself, or 'just' another link? (The extra bandwidth is
certainly nice, even in a flat switched network!)
I'm primarily concerned with weather and climate models here - WRF, CAM,
CCSM, etc. - and clearly the communication rate will depend to a large
degree on the resolutions used, but any information, even gut instinct, is
welcome. The more info the merrier.
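To make the communication pattern concrete: the dominant exchange in these
models is a per-timestep halo swap, where each rank trades thin boundary
strips with its grid neighbors, so higher resolution plus more ranks means
lots of smallish messages every step. Below is a toy sketch of that
pattern; the subdomain size, neighbor count, and 2-D decomposition are my
own simplifications, not WRF's or CAM's actual layout.

/* Toy 2-D halo exchange, the pattern WRF-like codes repeat each
 * timestep.  NX/HALO/STEPS are made-up sizes; real models also pack
 * corners and many variables, which only adds more messages. */
#include <mpi.h>
#include <stdlib.h>

#define NX    128        /* local subdomain edge, arbitrary */
#define HALO  NX         /* one strip of doubles per neighbor */
#define STEPS 100

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nprocs, dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);    /* pick a 2-D decomposition */

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

    int nbr[4];                          /* up, down, left, right */
    MPI_Cart_shift(cart, 0, 1, &nbr[0], &nbr[1]);
    MPI_Cart_shift(cart, 1, 1, &nbr[2], &nbr[3]);

    double *sbuf = calloc(4 * HALO, sizeof *sbuf);
    double *rbuf = calloc(4 * HALO, sizeof *rbuf);
    MPI_Request req[8];

    for (int step = 0; step < STEPS; step++) {
        /* post all receives, then all sends: 8 messages per step */
        for (int d = 0; d < 4; d++)
            MPI_Irecv(rbuf + d * HALO, HALO, MPI_DOUBLE, nbr[d], 0,
                      cart, &req[d]);
        for (int d = 0; d < 4; d++)
            MPI_Isend(sbuf + d * HALO, HALO, MPI_DOUBLE, nbr[d], 0,
                      cart, &req[4 + d]);
        MPI_Waitall(8, req, MPI_STATUSES_IGNORE);
        /* ... local stencil computation would go here ... */
    }

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}

With 32-48 ranks per node each doing this every step, the per-node message
rate adds up quickly - which is exactly what I'm wondering about.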
Thanks very much,
- Brian