BEOWULF cluster hangs
Roger L. Smith
roger at ERC.MsState.Edu
Thu Sep 26 14:39:20 PDT 2002
On Thu, 26 Sep 2002, Jeffrey B. Layton wrote:
> It depends on the CFD code. The code we use scales very well
> with just plain FastE. At 200 processors on FastE we get about 90%
> of the theoretical scaling. Oh, it's an external aerodynamics CFD
> code (unstructured inviscid/viscous). The above scaling number
> is for a viscous run.
>
> When we tested Myrinet, we got less than a 1% improvement
> in speed (wall-clock time for an entire run).
We've seen numbers almost identical to yours for our CFD codes.
We tested both Myrinet and FastE on our 64-processor (16-node)
Sun/Solaris cluster, and opted to skip Myrinet and buy more nodes when we
built our Linux cluster (1038 processors, 519 nodes, FastE to each node,
GigE between the switches in each rack).
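
For anyone unfamiliar with the phrasing above, "90% of the theoretical
scaling" is just parallel efficiency: measured speedup divided by the ideal
linear speedup on N processors. A minimal sketch of the arithmetic (the
timings here are made up for illustration; they are not the numbers from
either cluster):

    # Parallel efficiency = (t_serial / t_parallel) / n_procs.
    # All inputs below are hypothetical example values.
    def parallel_efficiency(t_serial, t_parallel, n_procs):
        """Fraction of ideal linear speedup achieved on n_procs processors."""
        speedup = t_serial / t_parallel
        return speedup / n_procs

    # A job taking 10000 s serially and 55.6 s on 200 processors
    # gives a speedup of ~180x, i.e. ~90% of the ideal 200x.
    eff = parallel_efficiency(10000.0, 55.6, 200)
    print(f"parallel efficiency: {eff:.0%}")  # -> 90%
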
_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_
| Roger L. Smith Phone: 662-325-3625 |
| Systems Administrator FAX: 662-325-7692 |
| roger at ERC.MsState.Edu http://WWW.ERC.MsState.Edu/~roger |
| Mississippi State University |
|_______________________Engineering Research Center_______________________|