[Beowulf] Naive question: mpi-parallel program in multicore CPUs
Gerry Creager
gerry.creager at tamu.edu
Tue Oct 2 07:50:18 PDT 2007
Just for the record, I hate HTML-encoded e-mails.
Li at mx2.buaa.edu.cn wrote:
> Hello,
> ----------------------------------------------------
>> This is perhaps a naive question.
>>
>> Ten years ago we started using the SP2, but we later changed to an
>> Intel-based Linux Beowulf in 2001. In our university there are quite a
>> number of MPI-based parallel programs running on a 178-node dual-Xeon
>> PC cluster that was installed 4 years ago.
>>
>> We are now planning to upgrade our cluster in the coming year. Very
>> likely blade servers with multi-core CPUs will be used. To port these
>> MPI-based parallel programs to a multi-core CPU environment, someone
>> suggested that OpenMP should be used, so that the programs can be
>> converted to a multi-threaded version. Nevertheless, that may take
>> time, and the users may be reluctant to do so. Also, for some of the
>> installed programs, we don't have the source code.
>>
>> Another user suggested that we could instead make a slight change to
>> the machinefile before executing the "mpirun" command.
>>
>> Suppose we are going to run an 8-task MPI program on a quad-core
>> cluster; then only 2 nodes need to be selected, with a machinefile
>> created that looks like "cpu0 cpu1 cpu0 cpu1 cpu0 cpu1 cpu0 cpu1",
>> i.e. 4 MPI tasks will be scheduled on cpu0 and 4 MPI tasks on cpu1.
>> But the REAL question will be:
>> Will EACH MPI task be executed on ONE single core?
>> If not, is there a Linux utility program that could help?
>>
> Generally, each MPI task should be executed on a single core; if not, you can run 4 mpd daemons on a single node.
>> I asked this question of one of the potential vendors, and the
>> salesperson promptly suggested, "Well, you can buy VMware to create
>> virtual CPUs to do so." Do you think that is logical?
Selection of OpenMP vs. MPI, or the combination of the two, depends
considerably on how your code functions. We have been working with
dual-core, dual-processor (4 cores/node) machines for a couple of years,
running exclusively MPI codes, and have seen very good performance.
Similarly (yeah, guys, I know it's not a Beowulf, but...), I run weather
forecast (WRF) codes on an IBM Cluster 1600 (p575) system, using 8 of
16 cores on 32 nodes (don't ask why: silly sysadmin tricks). The WRF
codes will run as shared-memory (SMP), distributed-memory (DM), or a
combination of the two but, for the domains I tend to forecast over, are
more efficient using just MPI.
We see no problems with multicore machines running MPI. It would be
worth encouraging your users to evaluate how their codes would run if
enabled for a combination of OpenMP and MPI, but it's not mandatory.
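If any users do want to try the hybrid route, the basic pattern is
small: one MPI task per node (or per socket), with OpenMP threads
filling the cores. A rough sketch, assuming an MPI library that provides
at least MPI_THREAD_FUNNELED (the loop body is just a stand-in for real
work):

/* hybrid.c -- one MPI task per node, OpenMP threads across its cores.
 * Rough sketch. Build: mpicc -std=c99 -fopenmp -o hybrid hybrid.c
 * Run:   OMP_NUM_THREADS=4 mpirun -np 2 -machinefile machines ./hybrid
 */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank, provided;

    /* FUNNELED: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library lacks thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threaded compute phase; communication stays outside the loop. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i);

    double total;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}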
gerry
--
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.862.3982 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843