<p dir="ltr">I noticed on systems running xen-kernel netback driver for virtualization, bandwidth drops to very low rates. </p>
<div class="gmail_quote">On Apr 27, 2013 6:19 PM, "Brice Goglin" <<a href="mailto:brice.goglin@gmail.com">brice.goglin@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br>
<br>
These cards are QDR and even FDR, so you should get 56Gbit/s (we see about<br>
50Gbit/s in benchmarks iirc). That's what I get on Sandy Bridge servers<br>
with the exact same IB card model.<br>
<br>
$ ibv_devinfo -v<br>
[...]<br>
active_width: 4X (2)<br>
active_speed: 14.0 Gbps (16)<br>
<br>
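(4X lanes at 14 Gbps per lane gives the 56Gbit/s FDR link rate.) If you just<br>
want the aggregate rate, ibstatus reports it directly; on a healthy FDR link<br>
it should show something like this (output trimmed, device name and exact<br>
wording depend on your setup):<br>
<br>
$ ibstatus<br>
Infiniband device 'mlx4_0' port 1 status:<br>
[...]<br>
        state:           4: ACTIVE<br>
        phys state:      5: LinkUp<br>
        rate:            56 Gb/sec (4X FDR)<br>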
<br>
These nodes have been running Debian testing/wheezy (default kernel and<br>
IB packages) for 9 months without problems.<br>
<br>
I had to fix the cables to get a 56Gbit/s link state. Without Mellanox FDR<br>
cables, I was only getting 40Gbit/s. So maybe check your cables. And if you're<br>
not 100% sure about your switch, try connecting the nodes back-to-back.<br>
<br>
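If you do test back-to-back, remember that one of the two nodes has to run a<br>
subnet manager, since there is no switch-side SM in that setup. A minimal<br>
sketch, assuming the opensm package is installed:<br>
<br>
$ sudo opensm -B    # -B runs it in the background as a daemon<br>
<br>
Once the SM has swept the fabric, the port state should go from INIT to<br>
ACTIVE and ibstatus will show the negotiated rate.<br>
<br>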
You can try upgrading the IB card firmware too. Mine is 2.10.700 (likely<br>
not up to date anymore, but at least this one works fine).<br>
<br>
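You can check which firmware a card is currently running via the fw_ver<br>
field in ibv_devinfo, before and after flashing; something like this, with<br>
the version obviously depending on your card:<br>
<br>
$ ibv_devinfo | grep fw_ver<br>
        fw_ver:                         2.10.700<br>
<br>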
Where does your "8.5Gbit/s" come from? IB status or benchmarks? If<br>
benchmarks, it could be related to the PCIe link speed. Upgrading the<br>
BIOS and IB firmware help me too (some reboot gave PCIe Gen1 instead of<br>
Gen3). Here's what you should see in lspci if you get PCIe Gen3 8x as<br>
expected:<br>
<br>
$ sudo lspci -d 15b3: -vv<br>
[...]<br>
LnkSta: Speed 8GT/s, Width x8<br>
<br>
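If it is a benchmark number, the perftest tools are an easy sanity check; a<br>
rough sketch (node1 is a placeholder hostname), start the server side on one<br>
node and point the client at it from the other:<br>
<br>
$ ib_send_bw              # on node1<br>
$ ib_send_bw node1        # on node2<br>
<br>
On a clean FDR link with PCIe Gen3 x8 that should land in the region of the<br>
50Gbit/s I mentioned above; a Gen1 link or a DDR link state will pull it<br>
down accordingly.<br>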
<br>
Brice<br>
<br>
<br>
<br>
<br>
On 27/04/2013 22:05, Jörg Saßmannshausen wrote:<br>
> Dear all,<br>
><br>
> I was wondering whether somebody has/had similar problems as I have.<br>
><br>
> We have recently purchased a bunch of new nodes. These are Sandy Bridge ones<br>
> with Mellanox ConnectX-3 MT27500 InfiniBand adapters, and this is where I am<br>
> having problems.<br>
><br>
> I usually use Debian Squeeze for my clusters (kernel 2.6.32-5-amd64).<br>
> Unfortunately, as it turned out, I cannot use that kernel as my Intel NIC is<br>
> not supported by it. So I upgraded to 3.2.0-0.bpo.2-amd64 (the backport kernel<br>
> for squeeze). With that kernel I got networking, but the InfiniBand was not<br>
> working: the device was not even recognized by ibstatus. Thus, I decided to do<br>
> an upgrade (not dist-upgrade) to wheezy to get the newer OFED stack.<br>
><br>
> With wheezy I get the InfiniBand working, but only at 8.5 Gb/sec. Simply<br>
> reseating the plug increases that to 20 Gb/sec (4X DDR), which is still slower<br>
> than the older nodes (40 Gb/sec, 4X QDR).<br>
><br>
> So I upgraded completely to wheezy (a full dist-upgrade this time), but the<br>
> problem did not go away.<br>
> I then re-installed squeeze, installed a vanilla kernel (3.8.8) and the<br>
> latest OFED stack from their site. And guess what: the same thing happens:<br>
> after a reboot the InfiniBand speed is 8.5 Gb/sec, and reseating the plug<br>
> increases that to 20 Gb/sec. It does not matter whether I connect to the edge<br>
> switch or to the main switch; in both cases I see the same behaviour.<br>
><br>
> Frankly, I am out of ideas now. I don't think the observed speed change after<br>
> reseating the plug should happen. I am in touch with technical support here<br>
> as well, but I think we are both a bit confused.<br>
><br>
> Now, am I right to assume that the Mellanox ConnectX-3 MT27500 cards are QDR,<br>
> so I should get 40 Gb/sec and not 20 Gb/sec?<br>
><br>
> Has anybody had similar experiences? Any ideas?<br>
><br>
> All the best from London<br>
><br>
> Jörg<br>
><br>
><br>
<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>