[Beowulf] How to configure a cluster network

Nifty niftyompi Mitch niftyompi at niftyegg.com
Thu Jul 24 18:17:19 PDT 2008

On Thu, Jul 24, 2008 at 06:39:00PM -0400, Mark Hahn wrote:
>> Your point about "most people don't need" is important!   With large
>> multi core, multiple socket systems external and internal bandwidth
>> can be interesting to ponder.
> that makes it sound like inter-node networks in general are doomed ;)
> while cores-per-node is increasing, users love to increase cores-per-job.

Not doomed but currently limiting.

But with CPU core-to-core and socket-to-socket memory improvements,
who knows.   Another shared commons inside a chassis to factor in is
cache memory.  For some time AMD had an advantage on core-to-core and
socket-to-socket communication, but that can change quickly.

Still, we do not like to link IB switches with a single cable, so why
should we limit eight cores in a single chassis to the bandwidth of
a single cable?  The more cores that hide behind a link, the more that
link's bandwidth has to be shared by those cores (MPI ranks).
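A back-of-the-envelope sketch of that sharing (the 16 Gb/s figure is
my assumption for usable 4x DDR IB data rate after 8b/10b encoding;
real throughput also depends on protocol overhead and message sizes):

```python
def per_rank_bandwidth(link_gbps: float, ranks: int) -> float:
    """Worst-case bandwidth per MPI rank when every rank on the node
    contends for the single cable at the same time."""
    return link_gbps / ranks

# Eight cores (ranks) hiding behind one assumed 16 Gb/s DDR IB link:
print(per_rank_bandwidth(16.0, 8))  # 2.0 Gb/s per rank under full contention
```

The same division applies to any shared link; the point is only that
per-rank bandwidth shrinks linearly as cores pile up behind one cable.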

In practice many applications need not contend on the wire for
rank-to-rank communication at exactly the same time, so YMMV.

This reminds me to ask about all the Xen questions.... Virtual machines
(sans dynamic migration) seem to address the inverse of the problem that
MPI and other computational clustering solutions address.   Virtual
machines assume that the hardware is vastly more capable than the OS
and application need, whereas Beowulf-style clustering exists because
the hardware is one Nth of what is necessary to get to the solution.

Where do Xen and other VME (not the system bus) solutions play in Beowulf land?

The "virtual machine environment" stuff will enable CPU vendors to add
more cores to a box, but how does that help or hurt an MPI cluster
environment?

	T o m  M i t c h e l l 
	Looking for a place to hang my hat.
