[Beowulf] 1 multicore machine cluster

Peter St. John peter.st.john at gmail.com
Wed Apr 22 11:56:19 PDT 2009


I've thought about this a little bit ("what's a node?"). Consider two kinds
of applications: compute-intensive vs. communication-intensive.

We used to generate pseudorandom numbers by starting with an N-digit number
S, squaring it, taking the middle N digits, and repeating. One could ask: what
is the period for a particular seed S? You could perform the process until you
get S back, or zero. Pure computation, no data, no contention for cache, much
less the NIC. Contrast that with n-body mechanics, where each planet (or
whatever) reports its position and velocity to every other planet at every
time increment. Lots of need for interprocess communication.
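
For concreteness, here is a minimal C sketch of that middle-square period
experiment (the 4-digit width, the example seed, and the step cap are my own
illustrative choices, not anything specified above):

    /* Middle-square period experiment: square a 4-digit seed, keep the
     * middle 4 digits, and count steps until the sequence returns to the
     * seed, collapses to zero, or (as a safeguard) exhausts the 10^4-value
     * state space. Pure computation: no data, no communication. */
    #include <stdio.h>

    static unsigned middle_square(unsigned s)
    {
        /* s*s has up to 8 digits; drop the low 2, keep the next 4 */
        return (s * s / 100u) % 10000u;
    }

    int main(void)
    {
        unsigned seed = 1234;          /* arbitrary example seed */
        unsigned s = seed;
        unsigned long steps = 0;

        do {
            s = middle_square(s);
            steps++;
        } while (s != seed && s != 0 && steps < 10000);

        printf("seed %u: stopped at %u after %lu steps\n", seed, s, steps);
        return 0;
    }

Each seed can be checked completely independently, so this kind of job scales
with the number of hardware threads and never touches the interconnect.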

So I think from the point of view of the former app, a single independent
thread (such as a hyperthread on an i7?) is a "node", and a board with
multiple sockets, multiple cores per socket, and multiple independent
threads per core has many "nodes" per NIC. But from the point of view of
the latter type of application, only the NIC-level board is a node.
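
To see why, here is a rough MPI sketch of the communication the n-body case
needs at every time step (the per-rank body count and the six-doubles-per-body
layout are just illustrative assumptions); as soon as ranks sit on different
boards, all of this traffic crosses the NIC:

    /* Illustrative n-body communication pattern: every rank owns some
     * bodies and must see every other body's position and velocity at
     * every time step. */
    #include <mpi.h>
    #include <stdlib.h>

    #define BODIES_PER_RANK 64        /* assumed decomposition */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* 6 doubles per body: x, y, z position and vx, vy, vz velocity */
        double *mine = calloc(BODIES_PER_RANK * 6, sizeof(double));
        double *all  = calloc((size_t)nranks * BODIES_PER_RANK * 6,
                              sizeof(double));

        for (int step = 0; step < 100; step++) {
            /* ... integrate my own bodies, updating `mine` ... */

            /* Every rank sends its state to every other rank, every step;
             * this all-to-all exchange is what stresses the interconnect. */
            MPI_Allgather(mine, BODIES_PER_RANK * 6, MPI_DOUBLE,
                          all,  BODIES_PER_RANK * 6, MPI_DOUBLE,
                          MPI_COMM_WORLD);

            /* ... compute forces on my bodies from `all` ... */
        }

        free(mine);
        free(all);
        MPI_Finalize();
        return 0;
    }

Shrunk to a single board, the same calls typically go through shared memory
instead of the NIC, which is exactly why the "node" boundary moves for this
kind of application.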

So I distinguish in my mind between "quantum" nodes and "fat" nodes. A
smart, complex application may want to work at multiple layers, assigning
certain jobs to fat nodes and others to quanta, with other levels possible
in between.
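
One common way to code that kind of layering, offered here only as a sketch
and not as anything proposed in this thread, is hybrid MPI + OpenMP: one MPI
rank per fat (NIC-level) node, with OpenMP threads filling the quantum nodes
(cores or hardware threads) inside it:

    /* Hybrid layering sketch: MPI ranks map to "fat" nodes, OpenMP threads
     * map to the "quantum" nodes inside each one. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* MPI_THREAD_FUNNELED: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* Each core/hardware thread works on its slice of the rank's job. */
            printf("fat node (rank) %d, quantum (thread) %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Launched with one rank per board and OMP_NUM_THREADS set to the core or
hardware-thread count, the MPI layer only ever sees the fat nodes, while the
threads carve up each rank's work locally.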

Peter

On 4/22/09, Douglas Eadline <deadline at eadline.org> wrote:
>
>
> This is an interesting question. As multi-core becomes more
> pervasive, this will beg the bigger question: what is a cluster?
> Recall, there is a design called a "constellation", where the
> number of cores on the nodes is greater than the number of nodes.
> Therefore, if you have four 8-core nodes (32 cores total)
> connected with IB, you have a "constellation cluster".
> The "parallelism" may be more in the nodes than between
> the nodes.
>
> In any case, once you have a pile of cores, how do you program
> them? Fortunately, MPI works on multi-core and across nodes.
> For the most part, OpenMP and threads only work on single
> motherboards. I investigated this idea (MPI vs OpenMP on
> a single multi-core node) and wrote up my results here:
>
>   http://www.linux-mag.com/id/4608
>
> Of course, there is a need for more testing
> with different compilers and hardware platforms,
> but it is clear that MPI on multi-core SMP is not
> necessarily a bad idea; in some cases it is a good
> idea. There are some who may argue this, but data points
> are really the only thing worth discussing.
>
> I'll have some new hardware in May and I plan on
> re-running the tests mentioned in the article.
>
> --
> Doug
>
>
> > is it possible to have a single multicored machine as a cluster?
> >
> > --
> > Jonathan Aquilina
> >
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>