[Beowulf] MPI OSCAR 3.0 on the BEOWULF cluster

Antonio Parodi antonio at cima.unige.it
Wed Nov 17 05:27:20 PST 2004


Good morning,
I am using a cluster with the following characteristics:

BEOWULF 
MPI OSCAR 3.0
RED HAT 9
11 NODES: 1 HEAD NODE
          10 SUBNODES: EACH SUBNODE HAS 2 P4 XEON 2.8 GHz PROCESSORS
          NO HYPERTHREADING
          2 GB RAM DUAL CHANNEL
          EACH SUBNODE HAS 200 GB DISK: EIDE 7200 RPM, 8 MB BUFFER

I want to use this cluster to test the scalability of a numerical code
using 1, 2, 4, and 8 processors. For example, when I test the code
with 4 processors I am not able to force it to use 4 subnodes
(that is, one processor on each subnode); instead the cluster uses 2
subnodes (2 processors on each subnode). This creates local contention
and memory-sharing problems within each subnode and decreases the
code's performance.
This surprises me, since I use the following script to run the
simulations, and line 3 (#PBS -l nodes=4:ppn=1) requests one processor
per subnode where possible:
            
#!/bin/csh
#PBS -m e
#PBS -l nodes=4:ppn=1
#PBS -l walltime=9999:00:00
#PBS -M user at domain
#PBS -j oe
#PBS -o rb.out
#PBS -N rb
#PBS
limit coredumpsize 0
# NN = number of processor slots PBS allocated to this job
set NN = `cat $PBS_NODEFILE | wc -l`
echo "NN = "$NN
#cd $PBS_O_WORKDIR
cd /home/antonio/test_paper_numerico/RB1E5
pwd
# copy the PBS node list for mpirun's machinefile
cat $PBS_NODEFILE > newlist
date
time mpirun -machinefile newlist -np $NN rb > nav2.log
date
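For what it is worth, $PBS_NODEFILE typically contains one line per allocated processor slot, so two slots on each of two subnodes show up as four repeated hostnames, and mpirun walks the machinefile in order, which packs two ranks onto each subnode. A minimal sketch of collapsing the list to one entry per distinct subnode (the hostnames node01/node02 and the file names are made up for illustration, and -np would then have to match the shorter list):

```shell
# Stand-in for $PBS_NODEFILE when PBS packs 4 slots onto 2 subnodes
# (node01 and node02 are hypothetical hostnames):
printf 'node01\nnode01\nnode02\nnode02\n' > nodefile

# Collapse to one line per distinct subnode; mpirun then starts at
# most one rank on each listed host before wrapping around the file.
sort -u nodefile > newlist
cat newlist    # prints node01 and node02, one per line
```

Note that after deduplication `wc -l < newlist` counts subnodes rather than processors, so a script that sets -np from the line count would launch fewer ranks unless adjusted.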

I hope that someone can help me.
Ciao
Antonio

