IEEE 1394
Eugen Leitl
eugen at leitl.org
Thu Dec 5 07:43:30 PST 2002
On Thu, 5 Dec 2002, Eray Ozkural wrote:
> A while ago I had asked whether there were any existing clusters using
> firewire IIRC. I had also found a similar query on this list, asked some time
> before me, but I don't have the link right now.
Have you seen
http://www.ultraviolet.org/mail-archives/beowulf.2002/2977.html
?
> I had even developed a design, unfortunately no professors had shown interest
> in it at Bilkent. There are interface cards containing 3 firewire ports with
It is most interesting to use with motherboards that have onboard IEEE 1394.
> aforementioned bandwidth/latency characteristics which makes them excellent
> point-to-point connection devices. With a suitably high performance kernel
I've found another bit of info after posting to the list, though it looks
proprietary. They claim "Asynchronous packet round trip, real-time
thread to real-time thread and back is 110 microseconds worst case."
http://www.fsmlabs.com/about/news_item.htm?record_id=48
Real-Time IEEE 1394 Driver from FSMLabs
Applications to industrial and machine control and clusters
November 12 2002, Socorro, NM. FSMLabs announces the immediate
availability of a full function OHCI IEEE 1394 driver for the RTLinux/Pro
Operating System. The driver supports asynchronous and isochronous modes
and bus configuration, and is available with FSMLabs' Lnet networking
package, which also supports Ethernet. The zero-copy variant of the UNIX
standard socket interface allows application code to have full access to
the packets and build application stacks without forcing packet copy.
Asynchronous packet round trip, real-time thread to real-time thread and
back is 110 microseconds worst case. The driver is currently being used by
FSMLabs customers who employ 1394 as an instrument control bus, but
real-time 1394 has applications in fields such as multimedia, robotics,
and enterprise (where it can be used for fault tolerance). As an example,
United Technologies uses the RTLinux 1394 support to bridge control
systems and VME/shared memory systems, taking advantage of the high data
movement rates of the 1394 bus to synchronize with shared memory on PCI
control systems. FSMLabs Network Architect Justin Weaver said: "The
driver exposes the flexibility of 1394, which can provide both very low
latency packet transmission and high data rates at the same time."
Driver functions include:
* Asynchronous requests and responses
* Isochronous stream packets with ability to tune contexts to
  specific or multiple channels.
* Asynchronous stream packets
* Up to 32 isochronous receive contexts and same number of transmit
  contexts.
* Cycle master capability.
* IRM capability and Bus Manager topology map control.
* Up to 63 nodes per bus and up to 16 ports per node.
About RTLinux/Pro and RTCore
RTLinux/Pro provides FSMLabs' RTCore, a robust POSIX PSE51 "hard" real-time
kernel, with a full embedded Linux development system. RTCore employs a
patented dual kernel technique to run Linux or BSD Unix as applications.
Hard real-time software runs at hardware speeds while the full power of an
open-source UNIX is available to non-real-time components. RTLinux/Pro is
used for everything from satellite controllers, telescopes, and jet engine
test stands to routers and computer graphics. RTLinux/Pro runs on a wide
range of platforms from high end clusters of multiprocessor P4s/Athlons to
low power devices like the MPC860, Elan 520, and ARM7.
> router, this would make the construction of high performance static-network
> distributed memory machines an ordinary feat.
>
> Each node would have 2 of those interface cards, totaling 6 firewire ports.
> 64 nodes can be connected in a hypercube topology, resulting in a high
> performance supercomputer.
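
Just to make the wiring concrete: in a 6-dimensional hypercube each node
links to the six nodes whose IDs differ from its own in exactly one bit,
one port per dimension. A rough sketch in C (hypothetical node numbering,
not tied to any particular 1394 stack):

/* Enumerate hypercube neighbors for a 64-node, 6-port layout.
 * Node i is cabled to the six nodes whose ID differs from i in
 * exactly one bit (one firewire port per dimension). Illustrative
 * only -- the node IDs here are hypothetical. */
#include <stdio.h>

#define DIMS  6
#define NODES (1 << DIMS)          /* 64 nodes */

int main(void)
{
    int node, d;

    for (node = 0; node < NODES; node++) {
        printf("node %2d:", node);
        for (d = 0; d < DIMS; d++)
            printf(" %2d", node ^ (1 << d));   /* flip bit d -> neighbor */
        printf("\n");
    }
    return 0;
}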
>
> If anybody wants me to come and help build it, just send me a job offer :)
There are solutions like http://www.disi.unige.it/project/gamma/mpigamma/
Hardware requirements
A pool of uniprocessor PCs with Intel Pentium, AMD K6, or superior CPU
models.
Each PC should have a Fast Ethernet or Gigabit Ethernet NIC supported by
GAMMA.
Currently supported Fast Ethernet NICs are: 3COM 3c905[rev.B, B, C], any
adapter equipped with the DEC DS21143 / Intel DS21145 "tulip" chipsets
and clones, Intel EtherExpress Pro/100.
Currently supported Gigabit Ethernet NICs are: Alteon AceNIC and its
clones (3COM 3c985, Netgear GA620), Netgear GA621 (and possibly GA622).
You should also connect all PCs by a Fast Ethernet or Gigabit Ethernet
switch, or by a Fast Ethernet repeater hub (or by a simple cross-over
cable, for a minimal cluster of two PCs).
They claim userland latencies from 35 us down to 10.5 us:
http://www.disi.unige.it/project/gamma/mpigamma/#GE
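
Numbers like these usually come from a small-message ping-pong test in
user space; a minimal sketch with plain MPI calls (nothing GAMMA-specific)
looks roughly like this:

/* Minimal ping-pong latency sketch with plain MPI (not GAMMA-specific).
 * Rank 0 bounces a small message off rank 1; half the average
 * round-trip time approximates the one-way userland latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[4] = {0};
    int rank, i;
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency ~ %.1f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}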
Given how cheap GBit Ethernet switches are getting, there's really no
point in going IEEE 1394 on a large scale, unless your motherboard happens
to have it for free along with the Ethernet ports, and your cluster is
small.