verifying SMP

Gabriel J. Weinstock gabriel.weinstock at dnamerican.com
Thu Jul 25 17:29:14 PDT 2002


Yes, that may be it. I used top to check the last processor the two processes 
were running on; the results looked random, although both processes were often 
scheduled on the same processor for long stretches. I wish I had 
saved the link yesterday, but I recall reading about a library with 
procedures to essentially set process affinity. It might have been for 
Solaris as you said.
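(For reference: recent 2.5 development kernels do add a sched_setaffinity(2) 
system call for exactly this, and Python's os module later exposed it as 
os.sched_setaffinity. A minimal sketch, assuming a Linux box; pinning to CPU 0 
is just for illustration:)

```python
import os

# Pin this process to CPU 0 (Linux-only; wraps sched_setaffinity(2)).
# Pass pid 0 to mean "the calling process".
os.sched_setaffinity(0, {0})

# Read the mask back to verify the pin took effect.
print(os.sched_getaffinity(0))  # -> {0}
```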
Is there a good reference for MPICH's configure options, by the way? For 
example, I am under the impression that the "-with-comm=shared" option 
optimizes the ch_p4 device for clusters of SMP nodes; however, I have been 
unable to verify this.
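(The sort of invocation I mean is below; the flag spellings are from memory of 
the MPICH-1 installation guide, so check "./configure -help" for your version 
before relying on them:)

```shell
# Build MPICH-1 with the ch_p4 device, using shared memory for
# communication between processes that land on the same SMP node.
# (Option names as recalled from the MPICH-1 docs -- verify against
# "./configure -help" before relying on them.)
./configure -device=ch_p4 -comm=shared
make
```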
Anyway, where we stand now is that binding a process to a processor is beyond 
MPICH's capabilities. We are running our code with two processes per node 
(SMP nodes) and assuming the OS will do the load balancing, which seems 
reasonable.
Thanks for the info,
Gabe

On Friday 26 July 2002 04:21 am, Joachim Worringen wrote:
> Gabriel J. Weinstock:
> > Hi,
> >   Does anyone know a good way to test if a certain process is running on
> > different CPUs on an SMP cluster node? For example, if one were to mpirun
> > -np 4 prog where the machines file looked like
> > node1:2
> > node2:2
> >   how could you verify that one process is being started on each CPU?
> > Using top is one option, but then you're still inferring where the
> > process is running. We're seeing funny numbers in our code and would like
> > to verify where Linux is scheduling processes.
>
> From my experience, this ("processor affinity of processes") is not (yet?)
> available in Linux. The effects you are seeing (which ones exactly?) are
> probably cache effects: a process runs on CPU X for one timeslot and on
> CPU Y for the next, losing the cache contents it built up on CPU X.
>
> The Win32 API (sic!) and, e.g., Solaris have nice functions (see
> http://www.sybase.com/detail/1,6904,1010600,00.html) to do this, and it
> improves performance for certain scenarios (esp. a 1:1 mapping of
> CPU-hungry processes). There's still much left to improve in the Linux
> kernel...
>
>  Joachim
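(On the original question of verifying placement without inferring from top: 
on Linux, /proc/<pid>/stat records the CPU a process last executed on -- the 
"processor" field, field 39 per proc(5). A small sketch, assuming a Linux 
/proc filesystem; the helper name last_cpu is mine:)

```python
import os

def last_cpu(pid):
    """Return the CPU a process last executed on, from /proc/<pid>/stat."""
    with open("/proc/%d/stat" % pid) as f:
        stat = f.read()
    # The comm field (field 2) is parenthesised and may contain spaces,
    # so split off everything after its closing ')'. The first token
    # after it is field 3 ('state'); 'processor' is field 39 overall,
    # i.e. index 36 in this remainder.
    fields = stat.rsplit(")", 1)[1].split()
    return int(fields[36])

print(last_cpu(os.getpid()))
```

Polling this per MPI rank (each rank printing its own pid and last_cpu) would 
show directly where Linux is scheduling each process.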



More information about the Beowulf mailing list