SCI, Myrinet Benchmarks using EFF_BW
jholmes at psu.edu
Thu May 18 07:24:13 PDT 2000
If anyone wants to see more benchmarks, I have a few 'raw' plots of
Pallas and Nas MPI benchmarks stored at
They'll eventually become part of an official webpage and have at least
a minimal description of what's going on, but for now, they're just
png's and gif's.
ScaMPI version: 1.9.1 (RPM)
GM version: 1.1.2
MPI-GM version: mpich-1.1.2..11 (-opt=-O2 -DGM_DEBUG=1)
Fast Ethernet: mpich-1.2.0 (Intel EEpro's, switched).
Nodes: Dual PIII 500's w/ 1GB RAM
For the graphs themselves, myri-1 means 1 process per dual-CPU node,
whereas myri-2 means 2 processes per dual-CPU node. On the Pallas
plots, "Internal Communication" means inside one dual-CPU node;
"External Communication" means between two dual-CPU nodes.
Unfortunately, we only have 8 SCI cards on loan, so the single CPU
benchmarks end at 8 processes and the dual CPU benchmarks end at 16
(though we are very grateful to have any on loan at all :).