Beowulf: A theoretical approach

Sean Ward SeanWard at msn.com
Fri Jun 23 01:48:16 PDT 2000


----- Original Message -----
From: Robert G. Brown <rgb at phy.duke.edu>
To: Walter B. Ligon III <walt at parl.ces.clemson.edu>
Cc: James Cownie <jcownie at etnus.com>; Nacho Ruiz <iorfr00 at student.vxu.se>;
Beowulf Mailing List <beowulf at beowulf.org>
Sent: Thursday, June 22, 2000 6:30 PM
Subject: Re: Beowulf: A theoretical approach


> On Thu, 22 Jun 2000, Walter B. Ligon III wrote:
>
> > --------
> >
> > Well, yeah, but its the PCI interface I'm talking about.  Robert Brown's
> > posting was really more to the point.  Build a NIC that interfaces
> > directly to the CPU and memory.
>
> Sure, and memory is indeed another way to do it.  Build a small
> communications computer that "fits" into a memory chip slot.  I'd guess
> that one could make the actual interface a real (but small and very fast
> -- SRAM?) memory chip that was on TWO memory buses -- the one in the
> computer in question and on the "computer" built into the interface
> whose only function is to manage communications and which would be
> strictly responsible for avoiding timing collisions -- possibly with a
> harness that allows it to generate interrupts to help even more. (Can a
> memory chip per se generate trappable interrupts now? Don't know.) Then
> accompany it with a kernel module that maps those memory addresses into
> a dedicated interface space and manages the interrupts, so the CPU only
> tries to write the memory when it is writable and read when it is
> readable.
[snip]
    From a software standpoint, that is doable. To my current understanding,
RAM cannot generate interrupts, which leaves a polling architecture: the
driver watches a fixed address for a "data received" bit to change, which
costs at least one word of memory traffic every n microseconds, where n is
whatever polling interval you choose. Ideally, a NIC driver would make that
interval configurable, so people who want to conserve memory bandwidth or
reduce CPU time lost to polling could trade away some latency.
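    As a rough sketch of that polling scheme (my own illustration: the
register layout, the RX_READY bit and the nic_dimm_poll() name are all
made up, and a real driver would read the mapped DIMM address rather than
a local variable):

/* Minimal user-space sketch of a fixed-address poll with a configurable
 * interval.  The volatile fake_status word stands in for the memory-mapped
 * "data received" register on the hypothetical NIC-in-a-DIMM. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define RX_READY 0x1u                     /* hypothetical "data received" bit */

static volatile uint32_t fake_status;     /* stands in for the mapped register */

/* Spin on the status word until RX_READY appears or max_polls is exhausted. */
static int nic_dimm_poll(volatile uint32_t *status,
                         unsigned poll_interval_us, unsigned max_polls)
{
    unsigned i;
    for (i = 0; i < max_polls; i++) {
        if (*status & RX_READY)           /* one word of memory traffic per poll */
            return 0;                     /* data waiting in the receive buffer */
        usleep(poll_interval_us);         /* the latency vs. overhead knob */
    }
    return -1;                            /* timed out */
}

int main(void)
{
    fake_status = RX_READY;               /* pretend the NIC flipped the bit */
    if (nic_dimm_poll(&fake_status, 50, 1000) == 0)
        puts("RX_READY seen; a driver would now copy the packet out");
    return 0;
}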
    At least in Linux, it would be fairly trivial to mask off the memory
offsets assigned to the NIC-as-RAM module, for example with the approach in
http://home.zonnet.nl/vanrein/badram/, by claiming those offsets (kmalloc or
equivalent) during kernel init so the allocator never hands them out. The
hardware approach would resemble those old 36-to-72-pin RAM converters that
stacked several older SIMMs: drop in your NIC logic, a DIMM slot for cache,
and a cable to an optical jack, and hope your case has clearance for the
resulting ungodly tall DIMM. The main design problem would be the interface
between the SDRAM bus and whatever NIC core you use.
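    To make the software side a little more concrete, here is a sketch
(mine, not a working driver) of how a Linux module might claim and map the
physical range decoded by such a DIMM. NIC_DIMM_BASE and NIC_DIMM_SIZE are
placeholder values, and the range would still have to be kept out of the
normal page pool with a BadRAM-style boot parameter:

/* Claim and map the hypothetical NIC-on-a-DIMM's physical window so the
 * rest of the driver can poll its status words and copy packets out of
 * the receive buffer with readl()/memcpy_fromio(). */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/io.h>

#define NIC_DIMM_BASE 0x20000000UL   /* placeholder physical base address */
#define NIC_DIMM_SIZE 0x00100000UL   /* placeholder 1 MB register + buffer window */

static void __iomem *nic_dimm;

static int __init nic_dimm_init(void)
{
    if (!request_mem_region(NIC_DIMM_BASE, NIC_DIMM_SIZE, "nic-dimm"))
        return -EBUSY;                        /* range already claimed */

    nic_dimm = ioremap(NIC_DIMM_BASE, NIC_DIMM_SIZE);
    if (!nic_dimm) {
        release_mem_region(NIC_DIMM_BASE, NIC_DIMM_SIZE);
        return -ENOMEM;
    }
    return 0;                                 /* driver proper would start polling here */
}

static void __exit nic_dimm_exit(void)
{
    iounmap(nic_dimm);
    release_mem_region(NIC_DIMM_BASE, NIC_DIMM_SIZE);
}

module_init(nic_dimm_init);
module_exit(nic_dimm_exit);
MODULE_LICENSE("GPL");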
    Assuming a 133 MHz front-side bus for RAM access and a standard 168-pin
SDRAM slot, you could theoretically be on the order of ~1 GB/s of I/O, given
the 64-bit data path on modern north bridges like the VIA KX133. Existing
fiber-based interconnects can already provide that, albeit with a latency
penalty. In the real world, once you add addressing, protocol and polling
overhead, plus the fact that most memory controllers need data interleaved
across several DIMMs to reach full bandwidth, you might get half the
theoretical "wire speed" of the SDRAM DIMM. The fact that the receive buffer
would be addressable RAM would be useful for many interesting things ;)
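    Spelling out the arithmetic behind that estimate (my own rough numbers,
not a measurement): a 64-bit path moves 8 bytes per transfer, so 133 MHz x
8 bytes ~= 1064 MB/s peak, and derating that by half for the overheads above
leaves roughly 500 MB/s of usable bandwidth.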
    The real advantage of that type of solution is that you can hack support
into any platform that uses DIMMs, provided the OS is modifiable. A CPU
socket design requires commitment to one architecture, and hence a smaller
market for this NIC-in-a-DIMM. Couple the prospect of a mixed revenue
stream, licensing the NIC-on-a-DIMM to other chip manufacturers, with
software sales of solutions optimized for the hardware (think databases and
filesystems initially, because they need the fast transactional support that
a high-speed write to another computer provides), and you have an attractive
proposal. Finally, there would probably be a window in which the NIC on a
DIMM was more cost-effective than new bus formats such as PCI-X, since, as
always, new tech costs more than old tech, and with a NIC on a DIMM you only
need a new RAM module, versus new I/O controllers, new NICs, new motherboard
designs, etc. However, I do not know enough about PCI-X and InfiniBand to do
anything but shoot myself in the foot, especially with respect to the
latencies of the different technologies.

-Sean Ward






