SCI, Myrinet Benchmarks using EFF_BW
sp at scali.no
Tue May 23 10:01:09 PDT 2000
Jason Holmes wrote:
> If anyone wants to see more benchmarks, I have a few 'raw' plots of
> Pallas and Nas MPI benchmarks stored at
> They'll eventually become part of an official webpage and have at least
> a minimal description of what's going on, but for now, they're just
> png's and gif's.
If you take a look at http://www.scali.com/performance/index.html under
the section "Pallas PMB", maybe you could grab some ideas (description
of what's going on etc.).
These tests were run on a 16-node system of dual 450 MHz PIII machines
equipped with SCI, comparing different versions of ScaMPI.
> Useful information:
> ScaMPI version: 1.9.1 (RPM)
> GM version: 1.1.2
> MPI-GM version: mpich-1.1.2..11 (-opt=-O2 -DGM_DEBUG=1)
> Fast Ethernet: mpich-1.2.0 (Intel EEpro's, switched).
> OS: linux-2.2.13
> Nodes: Dual PIII 500's w/ 1GB RAM
> For the graphs themselves, myri-1 means 1 process per dual-cpu node
> whereas myri-2 means 2 processes per dual CPU node. On the Pallas
> plots, "Internal Communication" means inside one dual CPU node.
> "External Communication" means between two dual CPU nodes.
> Unfortunately, we only have 8 SCI cards on loan, so the single CPU
> benchmarks end at 8 processes and the dual CPU benchmarks end at 16
> (though we are very grateful to have any on loan at all :).
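To make the myri-1 vs myri-2 distinction concrete: with an MPICH-style mpirun, one versus two processes per dual-CPU node is typically controlled by how often each hostname appears in the machinefile. A minimal sketch (the hostnames and file names here are hypothetical, not from the benchmarks above):

```shell
# myri-1 style: one MPI process per dual-CPU node (4 nodes, 4 processes)
printf 'node0\nnode1\nnode2\nnode3\n' > machines.one

# myri-2 style: two MPI processes per dual-CPU node (4 nodes, 8 processes)
printf 'node0\nnode0\nnode1\nnode1\nnode2\nnode2\nnode3\nnode3\n' > machines.two

# Then, with an MPICH-style mpirun on the PATH, something like:
#   mpirun -np 4 -machinefile machines.one ./PMB-MPI1
#   mpirun -np 8 -machinefile machines.two ./PMB-MPI1
wc -l machines.one machines.two
```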
Would any of you with access to different interconnect adapters be
interested in running the complete Pallas MPI benchmark (and other
benchmarks as well) some time? I think the results would be of interest
to everyone on this list.
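For anyone comparing numbers across these tools: the PingPong bandwidth figure reported by PMB (and by EFF_BW-style measurements) is essentially message size divided by half the round-trip time, since the message crosses the wire twice per round trip. A minimal sketch of that arithmetic in Python; the timing below is a made-up illustrative value, not a measured result:

```python
def pingpong_bandwidth(msg_bytes, round_trip_s):
    """Effective bandwidth in MB/s from one ping-pong round trip.

    One-way latency is taken as half the round-trip time, the usual
    PingPong convention.
    """
    one_way_s = round_trip_s / 2.0
    return (msg_bytes / one_way_s) / 1e6

# Illustrative (made-up) numbers: a 1 MB message with a 25 ms
# round trip corresponds to 80 MB/s effective bandwidth.
print(pingpong_bandwidth(1_000_000, 0.025))  # -> 80.0
```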
Steffen Persvold, Systems Engineer
Scali AS (http://www.scali.com)
Olaf Helsets vei 6, N-0621 Oslo, Norway
Email : sp at scali.no
Tlf : (+47) 22 62 89 50
Fax : (+47) 22 62 89 51
More information about the Beowulf mailing list