Infiniband (was RE: Beowulf: A theoretical approach)
Tony Skjellum
tony at MPI-Softtech.Com
Thu Jun 22 10:48:57 PDT 2000
Well, this is scuttlebutt, the kind you hear at a lot of meetings and
around water coolers. Let me not embarrass myself by saying more than
that I have heard it a bunch of times. Not having paid the $10,000 to be
NDA'd on Infiniband, I can't and won't say more.
We should invite an appropriate leader from Intel, or from another
Infiniband player, to provide a public briefing on what's real.
I will say that several people have mentioned that Infiniband is for
server area networks, not system area networks (i.e., clusters).
Tony
Anthony Skjellum, PhD, President (tony at mpi-softtech.com)
MPI Software Technology, Inc., Ste. 33, 101 S. Lafayette, Starkville, MS 39759
+1-(662)320-4300 x15; FAX: +1-(662)320-4301; http://www.mpi-softtech.com
"Best-of-breed Software for Beowulf and Easy-to-Own Commercial Clusters."
On Thu, 22 Jun 2000, Bill Moshier wrote:
> Tony - by 64-way maximum size, are you implying that Infiniband
> has a 64-node limit? I was under the impression that, at least
> from the hardware point of view, it was similar to VI Architecture,
> which is more-or-less unlimited in its interconnections.
>
> Bill
>
> -----Original Message-----
> From: Tony Skjellum [mailto:tony at MPI-Softtech.Com]
> Sent: Thursday, June 22, 2000 9:55 AM
> To: James Cownie
> Cc: Walter B. Ligon III; Nacho Ruiz; Beowulf Mailing List
> Subject: Re: Beowulf: A theoretical approach
>
>
> Rumor has it that Infiniband is only a 64-way maximum size
> infrastructure... perhaps that will change over time.
>
> Anthony Skjellum, PhD, President (tony at mpi-softtech.com)
> MPI Software Technology, Inc., Ste. 33, 101 S. Lafayette, Starkville, MS 39759
> +1-(662)320-4300 x15; FAX: +1-(662)320-4301; http://www.mpi-softtech.com
> "Best-of-breed Software for Beowulf and Easy-to-Own Commercial Clusters."
>
> On Thu, 22 Jun 2000, James Cownie wrote:
>
> >
> > > The problem right now really isn't in link speeds (though better
> > > link speeds are good); it's in how close or far the network interface
> > > is from the CPU. COTS hardware doesn't place a high value on direct
> > > access to IO devices; there is a higher value on a standardized bus
> > > interface that allows different system components to be integrated
> > > and updated independently. A "supercomputer" can have the network
> > > engineered directly into the node architecture. This is a huge
> > > advantage. Luckily, this advantage matters most in only some
> > > programs.
> >
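To make that distance concrete, here is roughly the path commodity
hardware gives you today: every send is a system call and a copy into
kernel buffers before the NIC, on the far side of a standard PCI bus,
ever sees the data. A minimal sketch (the peer address and port below
are placeholders):

    /* The commodity path: kernel-mediated TCP send over a stock NIC. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof peer);
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(5000);                   /* placeholder port */
        inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr);  /* placeholder node */

        if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
            perror("connect"); return 1;
        }

        const char msg[] = "hello, cluster";
        /* write() traps into the kernel and copies msg before the NIC
           ever touches it; that is exactly the distance described above. */
        if (write(fd, msg, sizeof msg) < 0) perror("write");

        close(fd);
        return 0;
    }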
> > If Infiniband does all that it is supposed to do, then it will rapidly
> > become the network of choice, since it _does_ have support for direct
> > (user-space) access to the comms, and has some nifty switches.
> >
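By contrast, here is a minimal model of the descriptor-ring, OS-bypass
style of messaging that VIA uses and that Infiniband is expected to
follow. Every name in it is invented for illustration; it models the
technique, not any real Infiniband API:

    /* Model of user-space messaging: the process fills a descriptor in
       a ring the NIC reads by DMA, then "rings a doorbell".  No syscall,
       no kernel copy.  All names here are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SLOTS 64

    struct send_desc {              /* work descriptor, read by the NIC */
        uint64_t buf_addr;          /* registered address of the payload */
        uint32_t length;            /* payload length in bytes */
        uint32_t valid;             /* ownership flag: 1 = NIC may send */
    };

    static struct send_desc ring[RING_SLOTS];  /* lives in user memory */
    static unsigned head;                      /* next free slot */
    static volatile uint32_t doorbell;  /* stands in for an MMIO register */

    /* Post a send entirely from user space: fill a descriptor, then a
       single store to the doorbell tells the NIC to fetch it. */
    static void post_send(const void *buf, uint32_t len)
    {
        struct send_desc *d = &ring[head % RING_SLOTS];
        d->buf_addr = (uint64_t)(uintptr_t)buf;
        d->length   = len;
        d->valid    = 1;            /* hand ownership to the NIC */
        doorbell    = ++head;       /* one store replaces the syscall */
    }

    int main(void)
    {
        static const char msg[] = "hello, cluster";
        post_send(msg, sizeof msg);
        printf("posted %u descriptor(s), doorbell at %u\n", head, doorbell);
        return 0;
    }

The whole point is that one store: posting a send never enters the
kernel, which is what "direct (user-space) access to the comms" buys you.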
> > Of course, in the short term it will be limited by the CPU-side
> > interfaces being PCI, but that's only the same limitation as
> > for Quadrics, Myrinet, SCI and so on.
> >
> > Once it becomes the standard for connecting to storage, it _should_ be
> > cheap, and a standard component of any "server-class" commodity
> > machine, whether IA* or another architecture. (IBM announced just
> > today that they'll be selling their interface chips and switches.)
> >
> > So, I expect that Infiniband will be engineered intimately into the
> > node architecture of COTS hardware, and that will help a lot.
> >
> > It'll be interesting to see how long it takes before the Linux
> > drivers are available!
> >
> > -- Jim
> >
> > James Cownie <jcownie at etnus.com>
> > Etnus, Inc. +44 117 9071438
> > http://www.etnus.com
> >
> >
>
>
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
>