[Beowulf] FDR Infiniband; Stampede at TACC
Eugen Leitl
eugen at leitl.org
Mon Nov 4 02:59:40 PST 2013
Some very fine cableporn, indeed:
http://www.reddit.com/r/cableporn/comments/1ptzna/fdr_infiniband_stampedetacc/
http://imgur.com/a/cftAg
[–]RedneckBob 9 points 7 hours ago
MOTHER OF GOD:
Each node is outfitted with an InfiniBand HCA card. The Stampede interconnect is FDR InfiniBand that delivers network performance of 56Gb/s to the node. The combined bandwidth of Stampede is nearly the same as that of 60,000,000 broadband-connected households. This is more than the combined residential bandwidth of AT&T, Comcast, Time Warner, and Verizon (reference). Stampede can move about twice as much data across its network per second as the entire Internet, with a latency of only 2 microseconds (reference).
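As a rough sanity check on that aggregate-bandwidth claim, here is a back-of-the-envelope calculation in Python. The node count is an assumption (Stampede is commonly reported at roughly 6,400 compute nodes); it is not stated in the quote, so treat the numbers as illustrative only:

    # Back-of-the-envelope check of the aggregate-bandwidth claim above.
    # ASSUMPTION: ~6,400 compute nodes, each with a 56 Gb/s FDR InfiniBand link.
    nodes = 6_400
    per_node_gbps = 56                      # FDR 4x link rate
    households = 60_000_000

    aggregate_tbps = nodes * per_node_gbps / 1_000
    per_household_mbps = nodes * per_node_gbps * 1_000 / households

    print(f"Aggregate injection bandwidth: ~{aggregate_tbps:.0f} Tb/s")
    print(f"That is {households:,} households at ~{per_household_mbps:.1f} Mb/s each")

This lands at roughly 358 Tb/s, i.e. about 6 Mb/s for each of 60 million households, so the comparison is at least self-consistent.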
[–]itsthehumidity 7 points 6 hours ago
For someone who doesn't know very much about this kind of thing, but enjoys the subreddit, what is this equipment for?
[–]Maxolon 1 point 1 hour ago
Porn. All of it.
[–]tonsofpcs 5 points 7 hours ago
I was with you until the last picture. WHY are they resting on the open tops of racks? Get a tray/ladder!
[–]frozentoad 2 points 6 hours ago
Yes, or a 4-6 inch trough. Also, the labels are coming off at the patch panel; get some Scotch tape on those, OP.
[–]NightOfTheLivingHam 4 points 12 hours ago
We were contemplating InfiniBand for our cluster in the long run. How does iSCSI handle over IB?
[–]tidderwork[S] 4 points 12 hours ago
How does iSCSI handle over IB?
Unfortunately, I don't know. I have never tried. Our clusters use parallel filesystems like fhgfs and glustre. Where simple network storage is needed over InfiniBand, we use NFS.
Maybe I'll spin up a lab to test iSCSI over IB. I have recently tested VMware ESXi with InfiniBand and NFS, and it works very well.
EDIT: That said, I would seriously consider looking at 10GbE. Unless you have a real need for 20+ Gb/s links, 10GbE would probably work just fine and be much less expensive. InfiniBand can make you go grey prematurely. Avoid it if you can.
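If you are trying to decide whether 10GbE would be enough before buying anything, a crude client-side probe of an existing NFS mount can help put numbers on it. A minimal sketch, assuming a hypothetical mount point and enough data to limit page-cache effects; treat the result as a ballpark only:

    # Rough sequential-write probe against an NFS mount (hypothetical path).
    # A 10GbE link tops out around 1.1-1.2 GB/s of usable payload, so results
    # well below that suggest the network would not be the bottleneck anyway.
    import os
    import time

    MOUNT = "/mnt/nfs_test"        # ASSUMPTION: hypothetical NFS mount point
    SIZE_MB = 4096                 # write 4 GiB to reduce caching effects
    CHUNK = b"\0" * (1 << 20)      # 1 MiB write buffer

    path = os.path.join(MOUNT, "throughput_probe.bin")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())       # make sure the data actually left the client
    elapsed = time.time() - start
    os.remove(path)

    print(f"~{SIZE_MB / elapsed:.0f} MB/s sequential write")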
[–]senses3 2 points 6 hours ago
Wow, I've never heard of fhgfs before. That's freakin awesome!
[–]NightOfTheLivingHam 2 points 12 hours ago
And what amazes me is how cheap IB is compared to FCoE and 10GbE.
[–]tidderwork[S] 5 points 12 hours ago
Lately, 10GbE has been very competitive in price. Time spent managing IB is also a factor. 10GbE is dead simple to install, cable, and support. IB support is often the stuff of nightmares.
[–]jaargon 5 points 11 hours ago
As the SA for a number of IB fabrics, I share your sentiment about IB support being difficult. Lately I've been thinking that IB is the opposite of Ethernet in the sense that a single misbehaving host can prevent the entire fabric from working (e.g. by being a slow consumer or due to a bug in the OFED stack).
I've gotten an email at 6:30am probably a dozen times this year saying that "IB is broken" after some completely unrelated maintenance was done on a host the night before. P.S., please get it working by 9:30am for the market open.
[–]slowofthought 2 points 9 hours ago
Particularly when you can get an enterprise-class 10GbE switch at ~$550/port and squeeze 128 line-rate ports into 2U. 10GbE becomes very attractive for nearly every network out there today.
I haven't the foggiest what something like Stampede would require from a throughput perspective while at load but I imagine the IB still offers quite the performance advantage in this scenario.
I imagine Dell would bend over backward to give you a pair of Z9000's, OP. ;)
[–]tidderwork[S] 1 point 9 hours ago
I haven't the foggiest what something like Stampede would require from a throughput perspective while at load but I imagine the IB still offers quite the performance advantage in this scenario.
Latency is also critical for the kinds of simulations Stampede does.
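To make the latency point concrete, here is a small sketch using the usual latency-plus-bandwidth (alpha-beta) cost model for message passing. The per-timestep message count and size are made-up illustrative values; the 2 us figure comes from the quote above, while the 10GbE figure is an assumed order-of-magnitude value for a kernel TCP path:

    # Illustrative alpha-beta model: time per message = latency + bytes / bandwidth.
    # ASSUMPTION: 26 neighbors (a 3-D halo exchange) and 8 KiB messages are
    # invented example values, not Stampede workload data.
    def comm_time(n_msgs, msg_bytes, latency_s, bw_bytes_per_s):
        return n_msgs * (latency_s + msg_bytes / bw_bytes_per_s)

    msgs_per_step = 26
    msg_bytes = 8 * 1024

    for name, latency, bw in [("FDR IB", 2e-6, 56e9 / 8), ("10GbE TCP", 50e-6, 10e9 / 8)]:
        t = comm_time(msgs_per_step, msg_bytes, latency, bw)
        print(f"{name:9s}: {t * 1e6:7.1f} us of communication per timestep")

With small messages the latency term dominates, which is why tightly coupled simulations care about 2 us versus tens of microseconds far more than about raw link speed.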
[–]slowofthought 2 points 9 hours ago
Ah yes, another factor that would definitely play to IB's strengths. I work primarily on storage infrastructure in private clouds, so I doubt I have anywhere near the performance requirements of something as unique as Stampede. That is truly the 0.01%. We appreciate you driving down those costs on 10GbE, though!
[–]senses3 2 points 6 hours ago
I'm thinking about connecting my SAN box to vSphere with IB. Do you have any tips or good posts on configuring IB? Generally, what kinds of things tend to go wrong with it?
[–]OnTheMF 2 points 11 hours ago*
InfiniBand supports SRP, which is roughly the InfiniBand analogue of Fibre Channel's FCP. Both carry SCSI natively with hardware-offloaded data placement; in SRP's case that means RDMA. Using iSCSI on either fabric will offer much less in terms of performance. That said, InfiniBand also offers iSER, which is an RDMA extension for iSCSI. If your hardware supports it, you'd see significant performance benefits over iSCSI on other types of networks.
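For anyone trying to keep the acronyms straight, here is a small summary of the SCSI transports mentioned above (reference only, not a benchmark; the descriptions are the standard definitions of these protocols):

    # Quick reference for the SCSI transport acronyms discussed above.
    SCSI_TRANSPORTS = {
        "FCP":   "SCSI over Fibre Channel",
        "SRP":   "SCSI RDMA Protocol - SCSI carried natively over an RDMA fabric such as InfiniBand",
        "iSCSI": "SCSI over TCP/IP - works on Ethernet or IPoIB, no RDMA",
        "iSER":  "iSCSI Extensions for RDMA - iSCSI control path, RDMA data path",
    }

    for proto, desc in SCSI_TRANSPORTS.items():
        print(f"{proto:6s} {desc}")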
[–]punk1984 4 points 9 hours ago
This makes my boyparts tingle.
[–]_Iridium 1 point 3 hours ago
[looks at pretty pictures, thinks to self] "Wow, that's a lot of nicely bundled Cat6 cabl... HOLY SHIT, THAT'S ALL FIBER!"