Problem with MPICH and Intel and PGI Compilers on Linux

Jeff Layton jeffrey.b.layton at
Tue Feb 4 04:06:05 PST 2003

Good morning,

   I'm having trouble with my MPI code using MPICH-1.2.4 or 1.2.5 with
the Intel Fortran/C compilers for Linux (v. 6.0 and 7.0) and the PGI
compilers for Linux (v. 4.0.2).
   I'm running on a Linux cluster of Xeon/2.4 GHz CPUs (2 per board)
that is connected via GigE through a Foundry switch. The nodes are
running RH 7.3 Linux (2.4.20 kernel - created by IBM).
   MPICH-1.2.4 was built and installed by IBM using the Intel 7.0
compilers for Linux (Fortran and C). I built MPICH-1.2.5 in my home
account using both the Intel 7.0 compilers and the Intel 6.0 compilers.
MPICH-1.2.4 was built with the PGI compilers (v. 4.0.2).
   So we have six combinations: the three compilers and the two versions
of MPICH. In all six cases, my code hits a logic check while reading the
input data and stops based on that value. However, before it exits, the
code prints the rank of each process. Here is a snippet of the code's
output:

Before call to rg_one: myrank -1073743612
 Error: partition with max. cells not found

Notice that the rank is incorrect.
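For reference, the rank comes from the usual MPI_INIT / MPI_COMM_RANK
sequence. A minimal Fortran sketch (the program and variable names here are
illustrative, not taken from my actual code) that also checks the returned
error code looks like this:

```fortran
      program rank_check
      implicit none
      include 'mpif.h'
      integer myrank, ierr

c     MPI_INIT must complete before any other MPI call.
      call MPI_INIT(ierr)

c     Query this process's rank. If the Fortran binding is broken
c     (e.g. a name-mangling mismatch between the compiler that built
c     MPICH and the compiler that built the application), myrank can
c     be left holding stack garbage such as -1073743612 (0xBFFFF904),
c     which looks like a 32-bit Linux stack address.
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      if (ierr .ne. MPI_SUCCESS) then
         print *, 'MPI_COMM_RANK failed, ierr = ', ierr
      endif
      print *, 'Before call to rg_one: myrank ', myrank

      call MPI_FINALIZE(ierr)
      end
```

On a healthy installation this prints a small non-negative rank (0 to
nprocs-1) from each process.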
   This happens for all six combinations. However, when I test the exact
same code using MPI/Pro, it works correctly with all three compilers.
The code also works correctly with MPI/Pro over Myrinet with both
Intel compilers and the PGI compiler.
   Any help is greatly appreciated.




Jeff Layton
Senior Engineer
Lockheed-Martin Aeronautical Company - Marietta
Aerodynamics & CFD

"Is it possible to overclock a cattle prod?" - Irv Mullins

This email may contain confidential information. If you have received this
email in error, please delete it immediately, and inform me of the mistake by
return email. Any form of reproduction, or further dissemination of this
email is strictly prohibited. Also, please note that opinions expressed in
this email are those of the author, and are not necessarily those of the
Lockheed-Martin Corporation.

More information about the Beowulf mailing list