[Beowulf] Infiniband: MPI and I/O?
Joe Landman
landman at scalableinformatics.com
Thu May 26 12:35:35 PDT 2011
On 05/26/2011 03:29 PM, Greg Keller wrote:
> Agreed. Just finished telling another vendor, "It's not high speed
> storage unless it has an IB/RDMA interface". They love that. Except
Heh ... love it!
> for some really edge cases, I can't imagine running IO over GbE for
> anything more than trivial IO loads.
Lots of our customers do, when they have a large legacy GbE network and
upgrading is expensive. We can have a very large fan-in to our units,
but IB (even SDR, whose ~8 Gb/s data rate is still 8x GbE) is really
nice for moving storage traffic.
> I am curious if anyone is doing IO over IB to SRP targets or some
> similar "block device" approach. The integration into the filesystem by
Both block and file targets: SRPT on our units, fronted by OSSes for
Lustre and the like. We can do iSCSI as well (over IB using iSER, or
over 10GbE ... it works really nicely in either case).
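For anyone who hasn't tried the block path, here's a rough sketch of
attaching an SRP target from the initiator side with the stock ib_srp
kernel module. The GUID/GID values below are placeholders -- in practice
you'd paste the string that ibsrpdm -c prints for your fabric:

    # load the SRP initiator and attach a discovered target
    # (parameter values are placeholders; use "ibsrpdm -c" output)
    modprobe ib_srp
    echo "id_ext=0x0002c90300000001,ioc_guid=0x0002c90300000001,\
    dgid=fe800000000000000002c90300000002,pkey=ffff,\
    service_id=0x0002c90300000001" \
        > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target

    # or the iSER route with open-iscsi (target IP is made up):
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10 -I iser
    iscsiadm -m node -p 10.0.0.10 -I iser --login

Either way you end up with a plain /dev/sd* block device that the
filesystem layer never knows came in over IB.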
> Lustre/GPFS and others may be the best way to go, but we are not 100%
> convinced yet. Any stories to share?
If you do this with Lustre, make sure your OSSes are in HA pairs using
pacemaker/ucarp, and use DRBD between the backend units, or MD RAID on
the OSS, to mirror the storage. Unfortunately IB doesn't virtualize
well (last I checked), so these have to be physical OSSes. I presume
something similar applies to GPFS.
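As a rough sketch (hostnames, devices, and addresses are all made up),
one OST of such a pair might look like this with DRBD 8.x and the stock
ocf:linbit:drbd / ocf:heartbeat:Filesystem resource agents:

    # /etc/drbd.d/ost1.res -- mirror the OST backing LUN between the pair
    resource ost1 {
        protocol C;                  # synchronous replication
        on oss-a {
            device    /dev/drbd0;
            disk      /dev/sdb;      # backend LUN on oss-a
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on oss-b {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

    # pacemaker (crm shell): mount the OST on whichever node holds
    # the DRBD primary
    primitive p_drbd_ost1 ocf:linbit:drbd params drbd_resource=ost1
    ms ms_drbd_ost1 p_drbd_ost1 meta master-max=1 clone-max=2 notify=true
    primitive p_fs_ost1 ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/mnt/ost1 fstype=lustre
    colocation c_fs_on_master inf: p_fs_ost1 ms_drbd_ost1:Master
    order o_drbd_before_fs inf: ms_drbd_ost1:promote p_fs_ost1:start

Protocol C costs you a round trip per write, but anything less and a
failover can hand the surviving OSS a stale OST.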
GlusterFS, PVFS2/OrangeFS, etc. work fine without the block devices,
and Gluster does its mirroring at the file level.
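In the Gluster case the file-level mirroring is just the replica count
at volume-creation time. A minimal sketch, assuming two servers with a
brick each (names are placeholders):

    # create a 2-way replicated volume and mount it on a client
    gluster volume create scratch replica 2 \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start scratch
    mount -t glusterfs server1:/scratch /mnt/scratch

There is also a "transport rdma" option at create time if you want the
bricks talking over IB, though your mileage may vary there.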
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615