Parallel Charmm and RedHat 7.1
David Chalmers
david.chalmers at vcp.monash.edu.au
Thu Jun 21 01:12:59 PDT 2001
Hi All,
I am trying to run the parallel version of Charmm (27b4) on our
Linux/Intel cluster. I have managed to get Charmm compiled with both f2c
and g77 (the "lite" version), using lam-6.3.1. I can get Charmm to run
under mpirun on a single processor, but it dies (or is killed) with two.
Can anybody give me a pointer to where I am going wrong?
David
Here is some input and output:
<grendel04 /test/c27test> lamboot -v hostfile
LAM 6.3.1/MPI 2 C++ - University of Notre Dame
Executing hboot on n0 (grendel04)...
Executing hboot on n1 (grendel03)...
topology done
<grendel04 /test/c27test> cat schema
/grendel/apps/charmm27/c27b4_f2c_p/exec/gnu/charmm n0 --
/grendel/apps/charmm27/c27b4_f2c_p/exec/gnu/charmm n1 --
<grendel04 /test/c27test> mpirun -c2c -w -O schema < ace2.inp
1
Chemistry at HARvard Macromolecular Mechanics
(CHARMM) - Developmental Version 27b4 February 15, 2001
Copyright(c) 1984,1992 President and Fellows of Harvard College
All Rights Reserved
Current operating system: GNU LINUX
Created on 6/21/ 1 at 18: 7:51 by user: david
Maximum number of ATOMS: 60120, and RESidues: 32000
Current HEAP size: 10240000, and STACK size: 900000
RDTITL> * CHARMM TESTCASE ENUM.INP
RDTITL> * AUTHOR: CHRISTIAN BARTELS
RDTITL> * FILES: TOPH19.RTF, PARAM19-1.2.INP
RDTITL> * TESTS: ACE + ADAPTIVE UMBRELLA SAMPLING OF THE POTENTIAL ENERGY
RDTITL> * (PEPTIDE FOLDING SIMULATIONS)
RDTITL> *
CHARMM>
CHARMM> ! setup system
CHARMM> stream /grendel/apps/charmm27/c27b4/test/datadir.def
VOPEN> Attempting to open::/grendel/apps/charmm27/c27b4/test/datadir.def::
OPNLGU> Unit 99 opened for READONLY access to /grendel/apps/charmm27/c27b4/test/datadir.def
INPUT STREAM SWITCHING TO UNIT 99
RDTITL> * CHARMM TESTCASE DATA DIRECTORY ASSIGNMENT
RDTITL> *
Parameter: IN1 <- "" <empty>
CHARMM> faster on
MISCOM> FAST option ON.
Scalar FAST Mode
CHARMM> set 0 /grendel/apps/charmm27/c27b4/test/data/ ! input data directory
Parameter: 0 <- "/GRENDEL/APPS/CHARMM27/C27B4/TEST/DATA/"
CHARMM> set 9 /grendel/apps/charmm27/c27b4/test/scratch/ ! scratch directory
Parameter: 9 <- "/GRENDEL/APPS/CHARMM27/C27B4/TEST/SCRATCH/"
CHARMM> return
VCLOSE: Closing unit 99 with status "KEEP"
RETURNING TO INPUT STREAM 5
CHARMM>
CHARMM> open unit 11 read form name @0toph19.rtf
Parameter: 0 -> "/GRENDEL/APPS/CHARMM27/C27B4/TEST/DATA/"
VOPEN> Attempting to open::/grendel/apps/charmm27/c27b4/test/data/toph19.rtf::
OPNLGU> Unit 11 opened for READONLY access to /grendel/apps/charmm27/c27b4/test/data/toph19.rtf
CHARMM> read rtf card unit 11
MAINIO> Residue topology file being read from unit 11.
TITLE> * TOPOLOGY FILE FOR PROTEINS USING EXPLICIT HYDROGEN ATOMS: VERSION 19
TITLE> *
mpirun: process terminated before completing MPI_Init()
1
Chemistry at HARvard Macromolecular Mechanics
(CHARMM) - Developmental Version 27b4 February 15, 2001
Copyright(c) 1984,1992 President and Fellows of Harvard College
All Rights Reserved
Current operating system: GNU LINUX
Created on 6/21/ 1 at 18: 4:57 by user: david
Maximum number of ATOMS: 60120, and RESidues: 32000
Current HEAP size: 10240000, and STACK size: 900000
RDTITL> No title read.
***** LEVEL 1 WARNING FROM <RDTITL> *****
***** Title expected.
******************************************
BOMLEV ( 0) IS NOT REACHED. WRNLEV IS 5
NORMAL TERMINATION BY END OF FILE
MAXIMUM STACK SPACE USED IS 0
STACK CURRENTLY IN USE IS 0
MOST SEVERE WARNING WAS AT LEVEL 1
HEAP PRINTOUT- HEAP SIZE 10240000
SPACE CURRENTLY IN USE IS 0
MAXIMUM SPACE USED IS 480
FREE LIST
PRINHP> ADDRESS: 1 LENGTH: 10240000 NEXT: 0
$$$$$ JOB ACCOUNTING INFORMATION $$$$$
ELAPSED TIME: .00 SECONDS
CPU TIME: .02 SECONDS
Killed
<grendel04 /test/c27test>
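One thing I notice in the output above: the second copy of Charmm prints
"RDTITL> No title read" and exits with "NORMAL TERMINATION BY END OF FILE",
which looks like it never saw the input script at all. Charmm reads its
script from stdin, and mpirun typically forwards redirected stdin only to
the first process in the schema, so the second rank would hit EOF
immediately. If that is the cause, one possible (untested) workaround is a
small wrapper so that every rank opens the input file itself instead of
relying on stdin forwarding -- the wrapper name and its use of the paths
from the session above are my own sketch, not anything from the Charmm or
LAM docs:

```shell
#!/bin/sh
# charmm-wrap.sh -- hypothetical wrapper script (name and paths are
# illustrative; adjust to your cluster). Each MPI rank executes this,
# so each rank reads the input file directly rather than depending on
# mpirun forwarding stdin across nodes.
exec /grendel/apps/charmm27/c27b4_f2c_p/exec/gnu/charmm < /test/c27test/ace2.inp
```

The schema would then list the wrapper in place of the charmm binary on
each node, and mpirun would be invoked without the "< ace2.inp"
redirection.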
_____________________________________________________________________________
David Chalmers Lab: 9903 9110
Victorian College of Pharmacy Fax: 9903 9582
381 Royal Pde, Parkville, Vic 3053 http://synapse.vcp.monash.edu.au
Australia David.Chalmers at vcp.monash.edu.au
_____________________________________________________________________________