[Beowulf] More multiple things per node

Mark Hahn hahn at physics.mcmaster.ca
Tue Jan 31 06:26:57 PST 2006


> > I understand about multiple NICs per node (done that). I've got SMP 
> > nodes; how do I "bond" a NIC to a CPU in MPI 1.2x?

why do you think you want to do this?  IMO, the main point of buying 
an SMP machine is to gain scheduling flexibility, so you don't have to 
bind a single process to a single cpu to a single IO device.

but you can certainly use /proc/irq/#/smp_affinity to bind a device's
interrupts to a cpu.  I figure on opterons, it makes sense to have them all
bound to the CPU closest on the HT topology to the IO tunnel(s), for instance,
to the CPU closest on the HT topology to the IO tunnel(s), for instance,
though I've never managed to measure any advantage.  there's also
/sys/class/pci_bus/0000:01/cpuaffinity, but I don't know what that does.
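for example, something like this (just a sketch -- the IRQ number 24 is
made up, look up your NIC's IRQ in /proc/interrupts; writing the mask
needs root) pins that interrupt's delivery to cpu0:

    /* sketch: pin a hypothetical NIC interrupt (IRQ 24) to CPU 0 by
     * writing a hex CPU bitmask to /proc/irq/24/smp_affinity. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/irq/24/smp_affinity", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%x\n", 1);   /* bit 0 set -> deliver to CPU 0 only */
        fclose(f);
        return 0;
    }

(the usual way is just "echo 1 > /proc/irq/24/smp_affinity" from a shell,
which does the same thing.)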

> That's the job of your MPI library. MPI has no standardized interface for this, 
> but a quality MPI implementation should do the correct thing automatically. If 
> not, you need to call some system functions yourself (and lose portability).

I think this is actually referring to binding a process to a cpu, no?
messing with cpu-nic bindings doesn't really seem like MPI's business,
since it knows the NIC only through a socket...

the sched_setaffinity interface seems simple enough, but if using it makes 
a big difference, then IMO you've found a scheduler bug.  certainly the
normal scheduler will not move processes around willy-nilly...
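for the record, the whole interface amounts to something like this
(a sketch; pid 0 means "the calling process", and the cpu_set_t macros
are glibc/Linux-specific, hence _GNU_SOURCE):

    /* sketch: pin the calling process to cpu0 via sched_setaffinity. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);       /* allow cpu0 only */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* from here on the scheduler keeps this process on cpu0 */
        return 0;
    }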
