SCI, Myrinet Benchmarks using EFF_BW

Jason Holmes jholmes at psu.edu
Thu May 18 07:24:13 PDT 2000


If anyone wants to see more benchmarks, I have a few 'raw' plots of
Pallas and NAS MPI benchmarks stored at

  http://www.personal.psu.edu/jwh128/benchmarks

They'll eventually become part of an official webpage and have at least
a minimal description of what's going on, but for now, they're just
PNGs and GIFs.

Useful information:

ScaMPI version: 1.9.1 (RPM)
GM version: 1.1.2
MPI-GM version: mpich-1.1.2..11 (-opt=-O2 -DGM_DEBUG=1
-gm-can-register-memory)
Fast Ethernet: mpich-1.2.0 (Intel EEPro NICs, switched)
OS: linux-2.2.13
Nodes: Dual PIII 500s w/ 1 GB RAM

For the graphs themselves, myri-1 means 1 process per dual-CPU node,
whereas myri-2 means 2 processes per dual-CPU node.  On the Pallas
plots, "Internal Communication" means within one dual-CPU node;
"External Communication" means between two dual-CPU nodes.

Unfortunately, we only have 8 SCI cards on loan, so the single CPU
benchmarks end at 8 processes and the dual CPU benchmarks end at 16
(though we are very grateful to have any on loan at all :).

--
Jason Holmes


More information about the Beowulf mailing list