Myrinet vs. Dolphin

Jared Hodge jared_hodge at iat.utexas.edu
Fri Jan 26 06:11:24 PST 2001


	I appreciate everyone's quick responses to my question.  As a result, I
have become quite a bit more educated in high speed networking.  Perhaps
I should clarify our problem a little more before there are any more
responses.  We are currently using the M2M-PCI64A Myrinet equipment,
which is not the most current equipment from Myricom.  They now have
their Myrinet 2000 equipment (a.k.a. M3M-PCI64B), including newer
switches, which scale better than the older equipment, although I did
not know this because they haven't updated their web page in six
months.  The bad part is that while the Myrinet 2000 equipment is
backward compatible with the older version, they are no longer selling
the older equipment, so I'm forced to pay for new, bleeding-edge
technology when I'll only be getting the performance of the older
technology.  They have no plan (as far as I know) for migrating
current customers off of the bleeding edge into a lower price bracket.
They just keep selling newer and better hardware at the same price,
which is good if you want to buy a new system, but not so good for
upgrading an existing cluster.
Speaking from a pragmatic point of view (I know the scientists among
you appreciate that), we bought the most up-to-date Myrinet equipment
at the time because we needed that performance to solve our problems.
We were
forced onto the bleeding edge by our performance demands.  If we had
known a year ago that we would still have to pay the same amount for the
same performance now, we probably would have chosen a different option.
	Also, in response to Patrick's message: we were told by Myricom
(specifically David PeGan) that in order to get to 24 nodes we would
need to do the following (direct quote from the e-mail):


"Add two more switches identical to the one you have (M2LM-SW16)
Add 16 more adapter cards (M2L-PCI64B-2)
Purchase enough LAN cables to bring your total to 24
Purchase enough SAN cables to bring your total to 12
The topology would be as follows:
You would use the 8 SAN ports on each switch for the purpose of
inter-switch connection. You would connect 4 SAN ports on switch A to
4 on switch B and 4 to switch C. You would then connect the remaining
4 SAN ports between switch B and C.
You would then connect 8 nodes to each switch using the LAN ports."

	That makes three 16-port switches just to get to 24 nodes: $15,000
worth of switches, in addition to the $26,000 worth of network cards.
That is a scalability problem.
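
	For anyone who wants to check that arithmetic, here is a rough
sketch of the port and cost accounting for the proposed topology.  The
per-unit switch price is an assumption back-calculated from the totals
above (about $5,000 per M2LM-SW16), not a figure from Myricom's price
list:

    # Port and cost accounting for the quoted 24-node, 3-switch layout.
    SWITCHES = 3
    LAN_PORTS_PER_SWITCH = 8   # host-facing ports used on each switch
    SAN_PORTS_PER_SWITCH = 8   # all consumed by inter-switch links
    SWITCH_PRICE = 5000        # assumption: $15,000 / 3 switches
    CARD_BUDGET = 26000        # network card total, as stated above

    nodes = SWITCHES * LAN_PORTS_PER_SWITCH   # 8 nodes per switch = 24
    san_links = 3 * 4          # pairs A-B, A-C, B-C, 4 links each = 12 cables
    san_ports_used = 2 * san_links   # each link uses a port on both ends
    assert san_ports_used == SWITCHES * SAN_PORTS_PER_SWITCH   # 24 == 24

    print("nodes:", nodes)                                     # 24
    print("switch cost: $%d" % (SWITCHES * SWITCH_PRICE))      # $15,000
    print("total: $%d" % (SWITCHES * SWITCH_PRICE + CARD_BUDGET))  # $41,000

Note that every LAN and SAN port on all three switches is consumed, so
a 25th node would mean buying yet another switch.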

	I hope no one else on the list gets stuck with this problem, but
it's something you might want to consider if you've got a small
cluster and not the very latest Myrinet equipment.  I'm hoping Myricom
will solve this by offering some sort of trade-in program or by
selling their surplus older equipment at a lower price.  A trade-in
program may seem like a foolish option, but there are probably many
other groups out there besides mine that would like to migrate off of
the bleeding edge of technology.  This could really only be done
through Myricom, though, since we would need some kind of guarantee
that the used equipment would deliver the needed performance.  Maybe
they could have their engineers check the units out and approve them
as Myricom Certified Used or something like that (it sounds like
something you'd do with a car, but hey, this stuff actually costs more
than my car).  Are there other people with this problem?

Patrick Geoffray wrote:
> 
> Jared Hodge wrote:
> 
> > well.  I have several questions though.  First, we are considering
> > upgrading our cluster with more nodes, and myrinet seems to have a
> > scalability problem in the average cluster size area.  We have 8 nodes
> 
> Hi,
> 
> There is no scalability problem in the average cluster size area :-)
> It's clear that if you have a 16-port switch, you will have to get a
> new one to connect more than 16 nodes. With two 16-port switches, you
> can connect up to 30 nodes (one link between the switches), but a
> single link between two switches is a very poor design (bad
> bisection). You can instead use more than one link, and the mapper
> will balance the communications across these inter-switch
> connections.
> 
> The new product (Myrinet 2000 switch, M3M) is in production now and
> will be added soon to the price list on the web. This new switch is
> composed of a rack and slots that contain 8 ports each (8 fiber, 8
> serial, or 8 SAN). You can get a huge rack (9U) and buy only one slot
> with 8 ports to connect 8 nodes. When you want to upgrade to 32
> nodes, you buy 3 more slots. You can mix link types (one slot of
> fiber and one slot of SAN, for example).
> Each slot contains a full 16-port crossbar, and 8 of its links are
> connected to the backplane at the back of the rack in a Clos network
> (maximum bisection).
> 
> Ask Myricom sales people for more information.
> 
> > I can tell.  Dolphin's switchless technology appears inviting in this
> 
> Without switches and with 2 ports per card, you have to build a
> grid, which means you share links for communications between
> different nodes. That may be enough for regular applications that
> talk only to their one-hop neighbours.
> Some people believe it's scalable, some people don't. I don't.
> 
> Of course, I am biased in my comments :-)
> 
> Hope this gives more information.
> 
> --
> Patrick Geoffray
> 
> ---------------------------------------------------------------
> |      Myricom Inc       |  University of Tennessee - CS Dept |
> | 325 N Santa Anita Ave. |   Suite 203, 1122 Volunteer Blvd.  |
> |   Arcadia, CA 91006    |      Knoxville, TN 37996-3450      |
> |     (626) 821-5555     |      Tel/Fax : (865) 974-0482      |
> ---------------------------------------------------------------
> 
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
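
To make the capacity trade-off Patrick describes concrete, here is a
minimal sketch (my own illustration, not anything from Myricom) of how
inter-switch links on two 16-port switches trade node count against
bisection bandwidth:

    # Two 16-port switches: every port spent on an inter-switch link
    # is a port that cannot host a node, so node capacity and
    # bisection bandwidth pull in opposite directions.
    PORTS = 16

    def capacity(inter_switch_links):
        """Max nodes on two switches joined by that many links."""
        return 2 * (PORTS - inter_switch_links)

    for links in (1, 2, 4, 8):
        print("%d link(s): up to %2d nodes, bisection of %d link(s)"
              % (links, capacity(links), links))
    # 1 link  -> 30 nodes, but all cross-switch traffic squeezes
    #            through a single link (the "very poor design" case)
    # 4 links -> 24 nodes, matching the per-pair link count in the
    #            quoted 3-switch proposal

The mapper spreading traffic across multiple inter-switch links is
what makes the wider configurations usable; the cost is the node
ports those links consume.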

-- 
Jared Hodge
Institute for Advanced Technology
The University of Texas at Austin
3925 W. Braker Lane, Suite 400
Austin, Texas 78759

Phone: 512-232-4460
FAX: 512-471-9096
Email: Jared_Hodge at iat.utexas.edu



