[Beowulf] 1 multicore machine cluster

Prentice Bisbal prentice at ias.edu
Fri Apr 24 06:16:53 PDT 2009

Glen Beane wrote:
> On 4/24/09 3:03 AM, "Jonathan Aquilina" <eagles051387 at gmail.com> wrote:
>     I'm impressed with the different views everyone has. I don't know
>     how many of you would agree with me: a multicore processor, let's
>     say a quad, is 4 nodes in one. Could one say it like that?
> I would not.  To me a node is a physical thing.  

I would disagree, slightly. I would say that a node is a single system
image; that is, one image of the operating system. The physical
boundary is a good rule of thumb, but it doesn't always work.

I used to have an Origin 350 with 8 processors. There were two "nodes"
(in SGI's terms, hence the quotes). One "node" was the main node with
the I/O stuff, and the other was just a container for the 4 additional
processors and their RAM. The two nodes were connected by a NUMALink®
cable, so they were in separate physical containers, similar to separate
nodes connected with IB, yet had a single system image, and I
administered it as a single 8-way system. Using the more general
definition of node, I would call that system a single node.

ScaleMP, which makes multiple systems connected by IB behave as a single
system image by means of a BIOS overlay (if that's the right term), also
blurs the lines of physical boundaries when defining a node.

SiCortex blurs the line in the other direction. Their deskside system
has 72 of their RISC processors in it, but has 6 "nodes", each with 12
cores running a separate instance of the operating system. And then
there's the "master" node (the one that provides the UI and the
cross-compiler), which runs on AMD64 processor(s).

I agree with your MPI gripe - the concept of a node is irrelevant in
the MPI programming paradigm. You're programming around the concept of
independent processes that talk to each other. If two of them happen to
be on the same node, that's for the implementation to deal with.
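To make that point concrete, here's a minimal sketch of the message-passing model in plain Python (using multiprocessing rather than an actual MPI library, purely as an illustration): each "rank" is an independent process with its own private memory, and the only way ranks interact is by exchanging messages. Nothing in the program says, or cares, whether two ranks share a node - that's the transport layer's problem. The names `worker` and `run` are mine, not from any MPI API.

```python
# Illustration of the message-passing model (NOT actual MPI).
# Each rank is an independent process; all interaction is via messages.
from multiprocessing import Process, Pipe

def worker(rank, conn):
    # Private state per rank; the only shared thing is the message channel.
    conn.send("hello from rank %d" % rank)
    conn.close()

def run(nprocs=4):
    results = []
    for rank in range(nprocs):
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(rank, child_end))
        p.start()
        results.append(parent_end.recv())  # receive the rank's message
        p.join()
    return results

if __name__ == "__main__":
    print(run())
```

Whether these processes land on one quad-core box or four single-core boxes, the program is identical; only the MPI implementation (or here, the OS) decides how the messages actually move.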
