[Beowulf] Infiniband: MPI and I/O?
bill at Princeton.EDU
Thu May 26 06:18:10 PDT 2011
Wondering if anyone out there is doing both I/O to storage as well as
MPI over the same IB fabric. Following along in the Mellanox User's
Guide, I see a section on how to implement QoS for both MPI and my
Lustre storage. I am curious, though, what might happen to the
performance of the MPI traffic when heavy I/O loads are placed on the
same fabric.
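Concretely, what I think the guide is pointing at is an opensm setup
roughly like the following (untested on my part, just adapted from the
opensm.conf defaults; the SL/VL numbers and arbitration weights are my
own guesses):

    # /etc/opensm/opensm.conf -- QoS sketch, values illustrative only
    qos TRUE
    qos_max_vls 2
    # map SL 0 (MPI) to VL 0, SL 1 (Lustre) and everything else to VL 1
    qos_sl2vl 0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
    # leave the high-priority table empty and weight MPI over storage
    # in the low-priority arbitration table
    qos_vlarb_high 0:0,1:0
    qos_vlarb_low 0:192,1:64

MPI would then be pinned to SL 0 (Open MPI's openib BTL has an MCA
parameter for this, btl_openib_ib_service_level if I remember right,
and MVAPICH2 has an equivalent), and the Lustre/o2ib traffic steered to
SL 1 via the qos-ulps section of /etc/opensm/qos-policy.conf as the
Mellanox guide describes.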
In our current implementation, we are using blades that are 50%
blocking (2:1 oversubscribed) when moving from a 16-blade chassis to
other nodes. Would trying to do storage on top dictate moving to a
totally non-blocking fabric?
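To put rough numbers on the blocking question: with 8 uplinks out of a
16-blade chassis and QDR links (assuming QDR here, call it ~3.2 GB/s
usable per link), that is roughly 25 GB/s of off-chassis bandwidth, or
about 1.6 GB/s per blade if all 16 blades talk off-chassis at once. If
a whole-chassis checkpoint to Lustre can stream anywhere near the file
system's aggregate rate, it could eat a large fraction of that headroom
right when MPI traffic also needs it, hence the question about whether
2:1 is still workable with QoS or whether non-blocking is the only safe
answer.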