I assume these are MSI-X interrupts belonging to the one Mellanox driver instance. MSI-X allows interrupts to be spread more or less evenly across CPUs, in conjunction with multiple send/receive queues (one interrupt vector per queue).<br><br>Each PCI device has a single driver (unless we are talking about virtualized I/O, which does not apply here), but a single driver can serve any number of interrupts.<br>
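You can inspect the per-CPU counts for each vector, and steer a vector to a particular CPU, from userspace. A minimal sketch (the IRQ number in the last line is a placeholder; read the real one from the first column of /proc/interrupts):

```shell
# Show the per-CPU interrupt counts for each eth-mlx4 MSI-X vector
# (|| true so this does not fail on machines without an mlx4 device):
grep eth-mlx4 /proc/interrupts || true

# smp_affinity takes a hex bitmask of allowed CPUs, where bit n = CPU n.
# Compute the mask that would pin a vector to CPU 3:
mask=$(printf '%x' $((1 << 3)))
echo "mask for CPU 3: $mask"    # prints "mask for CPU 3: 8"

# Applying it needs root; 1234 is a hypothetical IRQ number:
#   echo "$mask" > /proc/irq/1234/smp_affinity
```

If irqbalance is running it may rewrite these masks, so pin vectors only after stopping it or excluding those IRQs from its control.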
<br> Joachim<br><br><div class="gmail_quote">On Fri, Oct 23, 2009 at 2:25 AM, Robert Kubrick <span dir="ltr"><<a href="mailto:robertkubrick@gmail.com">robertkubrick@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
I noticed my machine has 16 entries in the /proc/interrupts table marked eth-mlx4-0 through eth-mlx4-15, in addition to the usual mlx-async and mlx-core entries.<br>
The server runs Linux Suse RT, has an infiniband interface, OFED 1.1 drivers, and 16 Xeon MP cores, so I'm assuming all these eth-mlx4 entries are supposed to do "something" with each core. I've never seen these IRQ handlers before. When I run infiniband apps the interrupts go to both mlx-async and eth-mlx4-0 (just 0; all the other entries don't get any interrupts). Also the eth name part looks suspicious.<br>
<br>
I can't find any reference online; any idea what these entries are about?<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div><br>