Beowulf & Fluid Mechanics

Jim Forsythe jrforsythe at msn.com
Fri Jun 30 08:16:13 PDT 2000


     The unstructured code Cobalt60 now has a Linux version, supplied by
me at the Air Force Academy.  The largest Linux cluster we have run on
is 44 processors on 22 nodes.  It scaled linearly on that cluster (i.e. 100%
parallel efficiency) on a 2 million cell grid.  The code was recently run
on 1024 processors of an SP3 and got over 98% efficiency (on a 3.2 million
cell grid).  For this code, scaling seems to stay linear until you get too few
cells on a processor.  For the expensive machines, this is about 2000 cells
per processor.  For our cluster it is around 8000.  Our cluster is 500 MHz
PIII, with 100BaseT.  We were shocked that we got a linear speedup on
100BaseT - we were expecting to have to buy Myrinet or Gigabit Ethernet.
The domain decomposition is done by ParMETIS, which seems to do a great job
of load balancing and giving a minimum number of faces on the interface
between processors.
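	As a rough sketch of the arithmetic above (the function names are mine;
the cells-per-processor thresholds and grid sizes are the figures quoted
in this message), the point where scaling falls off can be estimated like
this in Python:

```python
def max_useful_procs(total_cells, min_cells_per_proc):
    """Largest processor count before each partition drops below the
    cells-per-processor threshold where communication starts to dominate."""
    return total_cells // min_cells_per_proc

def parallel_efficiency(serial_time, parallel_time, procs):
    """Speedup divided by processor count; 1.0 means perfectly linear scaling."""
    return serial_time / (parallel_time * procs)

# 2-million-cell grid on the 100BaseT cluster (about 8000 cells/processor):
print(max_useful_procs(2_000_000, 8_000))   # -> 250 processors
# 3.2-million-cell grid on an "expensive" machine (about 2000 cells/processor):
print(max_useful_procs(3_200_000, 2_000))   # -> 1600 processors
```

So by these estimates the 3.2-million-cell SP3 run at 1024 processors was
still comfortably above the threshold, which is consistent with the 98%
efficiency figure.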
	Per processor, our cluster is roughly equivalent to a more recent SP2, or
a 225 MHz Origin 2000.  The SP3 is about 50% faster, and the T3E is about 50%
slower.  So with a linear speedup, and good per-processor performance, we
couldn't be happier with our cluster.

There is a Cobalt page at:
http://www.va.afrl.af.mil/vaa/vaac/COBALT/


Jim Forsythe
USAF Academy

-----Original Message-----
From: beowulf-admin at beowulf.org [mailto:beowulf-admin at beowulf.org]On
Behalf Of William Gropp
Sent: Friday, June 30, 2000 8:00 AM
To: Nicolas Lardjane
Cc: beowulf at beowulf.org
Subject: Re: Beowulf & Fluid Mechanics


At 09:53 AM 6/30/2000 +0200, Nicolas Lardjane wrote:
>Hello.
>
>I'd like to know if someone has any experience of using PC cluster for
>solving fluid mechanics problems by domain decomposition methods. The
>question is what performance can be expected compared to super-computers ?

A fully implicit, unstructured CFD code was the subject of
http://www.mcs.anl.gov/~gropp/papers/sc99/final-bell-12-4.pdf ; look at the
ASCI Red results in comparison with the other (non-vector) supercomputer
results.  Similar results have been seen on clusters with Myrinet.

Bill


_______________________________________________
Beowulf mailing list
Beowulf at beowulf.org
http://www.beowulf.org/mailman/listinfo/beowulf