[Beowulf] Infiniband: MPI and I/O?
Mark Hahn
hahn at mcmaster.ca
Thu May 26 14:23:30 PDT 2011
> Agreed. Just finished telling another vendor, "It's not high speed
> storage unless it has an IB/RDMA interface". They love that.
What does RDMA have to do with anything? Why would straight 10G ethernet
not qualify? I suspect you're really saying that you want an efficient
interface, as well as enough bandwidth, but that doesn't necessitate RDMA.
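To put a rough number on "efficient interface", here is a back-of-envelope
sketch in Python. The cycles-per-byte and clock figures are assumed
placeholders, not measurements of any real NIC or software stack; the only
point is that per-byte host cost, not the transport's name, decides what a
given I/O rate costs you.

def cores_consumed(rate_gbit, cycles_per_byte, core_ghz=2.5):
    """CPU cores' worth of cycles spent moving rate_gbit of traffic per second."""
    bytes_per_sec = rate_gbit * 1e9 / 8
    return bytes_per_sec * cycles_per_byte / (core_ghz * 1e9)

# Both per-byte costs below are illustrative assumptions, not measurements:
for label, cpb in [("a heavier per-byte software path", 4.0),
                   ("a lighter/offloaded path", 0.5)]:
    print("10 Gb/s via %s (%.1f cycles/byte): %.2f cores" %
          (label, cpb, cores_consumed(10, cpb)))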
> Except for some really edge cases, I can't imagine running IO over GbE
> for anything more than trivial IO loads.
Well, it's a balance issue. If someone were using lots of Atom boards
lashed into a cluster, 1 Gb apiece might be pretty reasonable. But for
fat nodes (say, 48 cores), even one QDR IB pipe doesn't seem all
that generous.
As an interesting case in point, SeaMicro was in the news again with a
512-Atom system: either 64 1GbE links or 16 10GbE links. The former
(0.125 Gb/core) seems low even for Atoms, but 0.3 Gb/core might be
reasonable.
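For reference, the per-core arithmetic behind those figures; the ~32 Gb/s
used for QDR is the usable data rate after 8b/10b encoding, and the 48-core
count is just the hypothetical fat node above.

def gb_per_core(total_gbit, cores):
    return total_gbit / cores

print(gb_per_core(64 * 1, 512))    # 64 x 1 GbE   -> 0.125 Gb/core
print(gb_per_core(16 * 10, 512))   # 16 x 10 GbE  -> 0.3125 Gb/core
print(gb_per_core(32, 48))         # one QDR port -> ~0.67 Gb/core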
> I am curious if anyone is doing IO over IB to SRP targets or some
> similar "block device" approach. The integration into the filesystem by
> Lustre/GPFS and others may be the best way to go, but we are not 100%
> convinced yet. Any stories to share?
You mean you _like_ block storage? How do you make a shared FS namespace
out of it, manage locking, etc.?
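To make that concern concrete, here is a purely conceptual Python sketch:
threads stand in for client nodes, a dict for on-disk metadata, and a
threading.Lock for a distributed lock manager. None of this is real SRP or
filesystem code; it only illustrates why uncoordinated writers to a shared
block device lose updates.

import threading
import time

block = {"free_inodes": 1000}      # stands in for on-disk metadata
dlm = threading.Lock()             # stands in for a distributed lock manager

def allocate(n, use_lock):
    for _ in range(n):
        if use_lock:
            with dlm:
                block["free_inodes"] -= 1
        else:
            v = block["free_inodes"]        # read ...
            time.sleep(0)                   # yield, as real disk/network latency would
            block["free_inodes"] = v - 1    # ... write: the other node's update can be lost

def run(use_lock):
    block["free_inodes"] = 1000
    nodes = [threading.Thread(target=allocate, args=(500, use_lock)) for _ in range(2)]
    for t in nodes:
        t.start()
    for t in nodes:
        t.join()
    return block["free_inodes"]

print("with locking:   ", run(True))    # 0, as intended
print("without locking:", run(False))   # usually well above 0: lost updates

That coordination layer is what shared-disk filesystems such as GFS2 or
OCFS2 add on top of an SRP-attached LUN, and what Lustre/GPFS already
integrate for you.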
regards, mark hahn.