On Mar 9, 2006, at 7:32 PM, beowulf-request@beowulf.org wrote:

> Infiniband with DDR is already at 20Gbps over CX4 copper

"The 4X InfiniBand protocol extends the existing 1X protocol by supporting up to four 2.5Gb/sec dual-simplex connections for an effective duplex transmission speed of 10Gb/sec. ..." (From: http://www.lecroy.com/tm/products/ProtocolAnalyzers/infiniband.asp?menuid=62 )

I did not mean to imply that this performance level is not possible, good, or valuable ... just that it is not "cost effective" considering energy budget (line length vs. energy consumed), upper limits, etc. InfiniBand (and others) can be pushed even further over silver conductors (or carbon, or superconductors) ... heck, you could make the same case for any protocol pushed to the upper limits of meat space (physical realities). Consider a protocol reliable or workable when it still holds up after changing the means of connectivity to another medium. Optics can open up an order-of-magnitude performance gain (electrons vs. photons, metal vs. glass, air, or other media) ... beyond a bus speed of 100 or even 500 Gbit/second at a lower energy requirement. The incremental performance improvements of late over metal conductors merely prove the need ... metal-conductor data transmission is falling behind Moore's "law".
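For what it's worth, here is a quick back-of-the-envelope sketch (in Python, just to make the arithmetic explicit) of where the 10 and 20 Gb/s figures come from. It assumes the standard InfiniBand numbers: 2.5 Gb/s signaling per lane at SDR, doubled for DDR, with 8b/10b line encoding taking 20% of the raw rate:

    # Rough InfiniBand link-rate arithmetic: signaling rate vs. usable data rate.
    SDR_LANE_GBPS = 2.5           # single-lane signaling rate at SDR, Gb/s
    ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line-encoding overhead

    def link_rate(lanes, ddr=False):
        """Return (signaling, data) rate in Gb/s for an InfiniBand link."""
        signaling = lanes * SDR_LANE_GBPS * (2 if ddr else 1)
        return signaling, signaling * ENCODING_EFFICIENCY

    print(link_rate(4))            # 4X SDR: (10.0, 8.0)  -- the LeCroy figure above
    print(link_rate(4, ddr=True))  # 4X DDR: (20.0, 16.0) -- the quoted "20Gbps"

So the quoted 20 Gbps is the aggregate signaling rate of a 4X DDR link; the payload rate is closer to 16 Gb/s before protocol overhead.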
This is going to become very important, very soon, as more advanced, dramatically higher-performance Beowulf systems are built. My vote would be for the most hardware/firmware-efficient protocol, considering energy budget vs. performance vs. the space allowed. (Cray, IBM, et al. currently build clusters that have hundreds of horsepower devoted to heat dissipation ... just because they all use metal conductors for the bus. Imagine having a twin-engined aircraft running inside your server farm ... something most mortals cannot afford, let alone survive.)

I still pose the question: which hardware protocol would be optimum for tight clusters of processors sharing a common bus (or other local hardware network) in a Standard Temperature & Pressure environment?

"Raw data" transmission a la legacy serial, parallel, SCSI, or other ... ?
"Packet-switched data" transmission a la Ethernet, USB, FireWire, or other ... ?
A yet-to-be-determined data transmission methodology / topology ... ?

Ed Karns
FireWireStuff.com