[Beowulf] NASTRAN on cluster
Currit, Dennis
Dennis_Currit at atk.com
Mon Apr 11 10:01:14 PDT 2005
We just installed a small cluster and are running NASTRAN 2005 on it. This is the first cluster we have set up, so we are beginners. The cluster consists of 5 Dell Precision 470 workstations running Fedora Linux. Each has 4 GB of RAM, two 160 GB SATA drives, and dual 2.8 GHz Xeon processors.

It seems to run pretty well; there were no real tricks. Some of the NASTRAN bdf files needed minor modifications in order to run, usually removing statements that assigned specific file paths and names to the scratch and dbs files.

One thing to consider is that NASTRAN doesn't seem to make very good use of dual-processor machines. For example, with my 5 dual-processor machines, specifying dmp=10 on the NASTRAN command line starts 2 processes on each machine and runs MUCH slower than starting only 1 process per machine. There is an (undocumented?) option, sys107=2, that tells NASTRAN each machine has two processors, and it improves performance somewhat. On a test job, I got the following results:
Single machine                       523 minutes
5-node cluster (dmp=5)               148 minutes
5-node cluster (dmp=10)              199 minutes
5-node cluster (dmp=5, sys107=2)     128 minutes
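For anyone trying to reproduce this, the best-performing invocation would look something like the sketch below. The dmp= and sys107= keywords are taken from the runs above; the job name and the hosts= list of node names are assumptions for illustration:

```shell
# Hypothetical example: one DMP process per node, with sys107=2
# telling NASTRAN each node has two CPUs. Node names are made up.
nastran myjob.bdf dmp=5 sys107=2 hosts=node1:node2:node3:node4:node5
```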
Also, I didn't set up any RAID; my thought was that I would rather put /tmp and /scratch on different physical drives. After I had it set up, I talked with MSC, and they recommended using RAID 0. They also recommended using ext2 rather than ext3 for /scratch, since scratch data is disposable and doesn't need the overhead of journaling. I made those changes, and my test job ran in 125 minutes.
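For reference, MSC's recommendation would be set up roughly like this sketch; the device names and partition layout are assumptions, so adjust for your own disks:

```shell
# Sketch: stripe the two SATA drives into a RAID 0 volume and format
# it as ext2 for /scratch. /dev/sda3 and /dev/sdb3 are hypothetical
# spare partitions -- substitute your actual devices.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkfs.ext2 /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch
```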
Also, MSC NASTRAN uses a maximum of 2 GB of RAM under 32-bit Linux, but I have seen documentation suggesting you get a real performance benefit from having at least 3 GB on each node. Along this line, I couldn't get jobs to run when I specified mem=2gb, but they did run when I specified mem=500mw (500 megawords, just under 2 GB).
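The arithmetic behind the 500mw figure, assuming a 4-byte NASTRAN word on 32-bit platforms, is:

```shell
# 500 megawords at 4 bytes per word:
echo $((500 * 1024 * 1024 * 4))       # 2097152000 bytes
# ...which sits just under the 32-bit 2 GB ceiling, while mem=2gb
# asks for exactly the full address-space limit and fails to fit:
echo $((2 * 1024 * 1024 * 1024))      # 2147483648 bytes
```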
My impression is that performance is limited by CPU speed, not I/O. I would spend the money on faster (maybe 64-bit) processors rather than on optimizing the disk system.