serguei.patchkovskii at sympatico.ca
Sun Jun 16 07:53:38 PDT 2002
----- Original Message -----
Ole W. Saastad" <ole at scali.com> wrote:
> with this talk about scalability and switches I would like to
> point out that the SCI interconnect uses no switch.
> The only thing you need to do to add an extra compute node is
> to recable the cluster. The cost increases linearly with
> the number of nodes. There are no step costs when you must buy
> more switch ports.
While this sounds more attractive than Myrinet in theory, the practice
may (or may not) turn out to be a little bit different: The number of
nodes you -want- to have on a single SCI ring is much lower than
the number of nodes theoretically possible. AFAIK, the SSP software
limits the number of nodes on a single ring to 256 - however, in most
cases you'll start seeing significant performance degradation after
about 10 nodes; for our applications, I won't put more than 6 nodes
on a ring.
Once you hit the limit for the 1D configuration, adding more nodes
requires going to 2D - which means not just adding another node
with an SCI interface card, but also replacing/upgrading cards in
all existing nodes. Once you hit the performance limit of the 2D config
(which, for our jobs, should be somewhere around 36 nodes - but,
in any case, won't be much beyond 100), you'll need to upgrade
to a 3D torus. Again, this would mean replacing -all- existing SCI
cards (and getting a lot of long SCI cables - which are not cheap
at all).
In summary, -if- adding more nodes to your SCI cluster can be
done without changing the topology, the cost per node is linear
(just like with a Myrinet switch, which still has some spare ports).
On the other hand, if a change in topology is required, the cost of
adding one node is proportional to the number of the existing
nodes (just as if you had to replace a Myrinet switch).
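To make the cost argument above concrete, here is a small sketch of the
marginal cost of adding one node. All numbers are made-up placeholders
(a hypothetical card price and a per-ring practical limit of 6, as used
above) - not real SCI pricing or hard topology limits:

```python
# Illustrative cost model for growing an SCI torus cluster.
# CARD_COST and RING_LIMIT are hypothetical placeholder values.

CARD_COST = 1000       # assumed price of one SCI interface card
RING_LIMIT = 6         # assumed practical nodes per ring dimension

def dims_needed(n: int) -> int:
    """Smallest torus dimensionality that can hold n nodes."""
    d = 1
    while n > RING_LIMIT ** d:
        d += 1
    return d

def cost_to_add_node(current_nodes: int) -> int:
    """Marginal cost of growing the cluster by one node.

    If the new node fits the current topology, you buy one card
    (linear growth).  If it forces a jump to a higher-dimensional
    torus, every existing node's card must be replaced too, so the
    cost is proportional to the cluster size.
    """
    if dims_needed(current_nodes + 1) > dims_needed(current_nodes):
        # Topology change: new card plus replacements for all nodes.
        return CARD_COST * (current_nodes + 1)
    return CARD_COST  # no topology change: just one new card
```

With these placeholder values, going from 5 to 6 nodes costs one card,
but going from 6 to 7 crosses the 1D limit and costs seven cards - the
step cost the post describes.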