[Beowulf] Win64 Clusters!!!!!!!!!!!!

Robert G. Brown rgb at phy.duke.edu
Thu Apr 12 10:48:44 PDT 2007


On Thu, 12 Apr 2007, Peter St. John wrote:

> I propose we bifurcate into two threads (both of which may be done!).
>
> 1. Thesis: 64 bit good. We are all agreed now, case closed, IMO.

:-)

>
> 2. Thesis: no group of human beings will ever directly author source code
> (meant to compile together) in excess of 4GB.
>
> I think we agree with RGB that 2 is irrelevant to 1. It may have some
> amusement value of its own however, as RGB seems adamant to agree with Jon
> that 2) is true, when I have already proven it is false :-) So if we want to
> debate (2), independently of any concern about which CPU, 32 or 64, to
> choose for clusters, then by all means bring it on :-)

Not "will ever" on my part.  "Has".  With a possible set of exceptions
even now that if demonstrated, are still likely "measure zero" in the
space of all software ever written.  It just isn't really worth arguing
about -- as I attempted to point out (and will do so again:-) the proper
unit for discussion and addressing isn't "the program" (as in a
standalone program, whatever that means in an operating-system driven
kernel environment) in the first place.  It isn't even clear what it
SHOULD be -- maybe the sum total of code resident on any given computer
at one time, maybe the sum total of code resident on the entire Internet.
It depends on the point of the discussion.  One has to consider
hierarchical decomposition or one will end up talking about just one
level as if it is "the level" at which complexity must be developed or
assessed.  This is not the case.

Modern computers are totally modular on the ware side.  There is the
bootloader and bios (firmware) and device firmware.  There is the
kernel.  There are all the kernel modules (device drivers and more).
There are the shared libraries.  There are the "essential" programs that
are effectively part of the operating system.  There is the GUI base
(e.g. X), the network base, the disk base.  Then there is userspace.
Within userspace there is the WM.  There are all the basic window apps.
There are the user-selected interactive applications.  There are shells
and shell windows (e.g. xterms).  There are tasks forked off of these
interfaces.  There are tasks forked off by cron and friends.  Most of
these use parts of the resident libraries, and add new libraries to the
list of those loaded at any time.  Unused libraries are cleared out to
free up memory blocks.  Data is buffered and/or cached to provide the
illusion of low latency and high speed wherever possible.
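
To make the "add new libraries to the list of those loaded" step
concrete, here is a tiny illustrative sketch in C -- my own toy, not
anything authoritative -- which assumes an ordinary Linux box with libm
and the dlopen() interface available (build with e.g. gcc
dlopen_demo.c -ldl):

   /* dlopen_demo.c -- pull a shared library into this process's
    * address space on demand, look up one symbol in it, use it,
    * and drop it again.  The dynamic loader does the same dance,
    * silently, for every library in the lists described above.
    */
   #include <stdio.h>
   #include <dlfcn.h>

   int main(void)
   {
       void *handle;
       double (*cosine)(double);

       /* libm.so.6 is an assumption -- present on most Linux systems */
       handle = dlopen("libm.so.6", RTLD_LAZY);
       if (!handle) {
           fprintf(stderr, "dlopen failed: %s\n", dlerror());
           return 1;
       }

       /* fetch one function out of the freshly mapped library */
       cosine = (double (*)(double)) dlsym(handle, "cos");
       if (cosine)
           printf("cos(0.0) = %f\n", cosine(0.0));

       dlclose(handle);   /* and it can be unmapped when no longer needed */
       return 0;
   }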

Our perception of a single binary as "a program" that stands alone is
simply that, a possibly convenient perception, one inherited in large
measure from the old days when "personal computers" DID only one thing
at a time.  What is a program?  The core stripped binary?  The
static-linked equivalent unstripped binary?  The binary plus all the
associated binary code elements invoked by the program when it e.g.
invokes X functions and/or the network stack?  All of this plus the
kernel that is in fact in charge of when the program gets put onto the
CPU, that actually moves the program around in memory as needed in order
to free up large blocks of memory on demand?  When we include the kernel
and all of those functional subsystems involved in running my "program",
do we need to include all of the other programs that run in competition
or collaboration with it, perhaps sharing libraries with it, perhaps
communicating with it?

Unix makes complex tasks out of many simple tasks.  This is the thing
that puts the lie to the idea that writing "big programs" leads to a
"catastrophe" (in the mathematical sense) associated with management of
the complexity.  Unix (etc, not just Unix) decomposes the complexity
into many smaller, less complex entities (in a completely recursive
way).  When I use printf("Hello, World!\n"); in a tiny C program, I
invoke an entire hierarchy of code fragments with clear boundaries and
responsibilities.  Each one is fairly simple -- together they perform an
amazingly complex task (as anybody knows who has ever tried to write
"printf" like output directly into the memory of a video adapter via a
low level interface in raw assembler) and reduce it AGAIN to simplicity.
Even assembler is just a "front" for CPU-based microcode which is itself
modular.  Way, way down there is the real "program" -- the physical
signals bouncing around on silicon in a curious pattern that only makes
"sense" to humans watching a screen and typing on a keyboard.

This is where I really, really agree with Peter that it is by no means
clear that humans cannot write "programs" that involve "code resources"
of nearly arbitrary size.  "Complexity" is certainly not a boundary of
any sort to this process.  Those code resources are ALWAYS a series of
encapsulations of complexity so that any argument that "humans cannot
deal with the complexity of programs beyond size X" is simply incorrect.
If by asserting complexity as a barrier to human accomplishment you mean
that I personally cannot start with CPU microcode and end up with Open
Office, hey, I personally cannot start with CPU microcode and end up
with >>assembler<< (even though I actually took a course in doing just
that 33 years ago:-) but some people can and have done it so I don't
have to, others have taken the assembler and written compilers, still
others have written libraries and operating system kernels and UIs, all
of which are available to be "programmed" by me with just a single
statement. NO higher level human programs are written "alone" -- they
ALL invoke a hierarchical encapsulation of the work of many, many
others.

So, in five or ten years, when perhaps voice recognition and some fancy
new HAL-like AI has added a few GB of memory resident code to a kernel
image that you can talk to to invoke programs and that anticipates your
commands by pre-evaluating results and buffering them in case you follow
predicted operational pathways, when X has bloated out to a GB all by
itself to handle ultra-high resolution pen-driven touchpads that can do
handwriting recognition and drawings and paintings at 600dpi resolutions
and 64-bit color depths, when a mathematician might routinely do group
theory on truly immense groups integrated with dynamical theory (e.g.
string theory) on truly immense spaces on a 4 GB video space with voice
control and interactive 3d projective visualization, when one WRITES the
program that links to X, to 3DVR, to voice recognition, to group theory,
to number theory, to linear algebra and vector analysis and ode solution
in large dimensions as something that is only a few lines long (as it is
all built into, say, mathematica) where mathematica uses somewhat more
lines to invoke each of these, and each of these uses still more lines
to invoke still lower subsystems, all the way down the hierarchy to the
microcode that picks up a signal from pixel FFE0A719 or preprocesses the
audio signal across the following 100 usec interval and "does the right
immensely complex thing" purely automatically -- is THAT program
guaranteed not to be bigger than 2^32 bytes in size?  (Top that for a
rhetorical sentence, Peter:-)

I think not.  Right NOW I don't think we're there, but it is simply not
correct to state that humans won't get there because there is some sort
of "complexity barrier" that Humans are Not Meant to Cross.  Humans cope
with complexity in all aspects of semantic discourse through algebraic
encapsulation and hierarchical organization, which PREVENT the human
brain's "immediate" capacity for complexity from being overwhelmed and
which at the same time partition the work so that different people can
contribute different parts to an amazingly complex system.

Which even draws the whole thing back to clusters.  Clusters have always
been a way of accomplishing just such a partitioning.  When one counts
the size of "a single task", does one multiply by N when running it on an
N-node cluster?  Obviously the memory addressing is independently
segmented at the hardware level and may well be segmented in the program
as well, but there is little doubt that if one is running code on a 32K
node cluster the "program" being run can occupy MB per node.  Deep Blue
therefore has run "programs" that long ago exceeded the 32 bit threshold
IF you unraveled them and tried to run them as serial tasks.  Clustering
is all about hierarchical problem decomposition into coupled subtasks.
On a more mundane scale, when I invoke google from my laptop, is the
code image (in terms of complexity) limited to the code that parses my
keystrokes into my browser, or does it extend down through X to the
network and kernel etc?  Does it extend OVER the network to the massive
cluster on the far side and the immense code base and data base there?
Does it extend via google's webcrawlers over the entire Internet and all
of the code running on all of the webservers that are invoked, hashed,
indexed, filed, and hyper-rapidly retrieved?
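
For the cluster side of that argument, here is a hedged little sketch
(it assumes a working MPI installation -- compile with mpicc, run with
mpirun -np N -- and the 64 MB figure is arbitrary) showing how modest
per-node footprints add up to an aggregate "program" far larger than
any 32 bit address space, without any single node ever noticing:

   /* cluster_footprint.c -- per-node memory is small; the aggregate
    * spread across N ranks is not.
    */
   #include <stdio.h>
   #include <stdlib.h>
   #include <mpi.h>

   int main(int argc, char **argv)
   {
       int rank, nranks;
       long long local_bytes = 64LL * 1024 * 1024;  /* a modest 64 MB slab */
       long long total_bytes = 0;
       char *slab;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &nranks);

       slab = malloc(local_bytes);
       if (slab) slab[0] = 0;            /* touch it so it is really there */

       /* sum the per-rank footprints onto rank 0 */
       MPI_Reduce(&local_bytes, &total_bytes, 1, MPI_LONG_LONG_INT,
                  MPI_SUM, 0, MPI_COMM_WORLD);

       if (rank == 0)
           printf("%d ranks x 64 MB = %lld bytes total (2^32 = %lld)\n",
                  nranks, total_bytes, 1LL << 32);

       free(slab);
       MPI_Finalize();
       return 0;
   }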

The interesting thing, as Peter notes, isn't the idea that when I compile
a dynamically linked hello world program it is unlikely to produce a
runtime image bigger than a certain size.  It is just how VAST the REAL
"runtime image" is associated with hierarchical extension of all the
subprograms invoked to accomplish a little thing like asking google to
look up "Isaac Asimov" to discover how many books he DID write in real
time while writing yet another essay on the extreme (linux) limits of
computing on a system that is very definitely chewing gum and talking at
the same time as it runs its 150-odd tasks with operational PIDs,
juggles any number of devices and interrupts and complex subsystems,
constantly rearranges memory and rewrites the disks to de-fragment files
and optimize access times, maintains umpty tables, all MIRACULOUSLY
complex but so beautifully organized that the illusion of simplicity and
bug-free functionality is preserved to where "anyone" can use it, even
people who have no idea HOW it does all that it does.

And yeah, humans wrote all of this.  Wonder of the world.  Right up
there with putting a man on the moon, far far beyond just building great
pyramids or the golden gate bridge.  Quite possibly (eventually) MORE
complex than the human genome itself, in purely information-theoretic
terms (and yes, to hold this discussion quantitatively we need to be
invoking Shannon when we talk about hierarchical encapsulation as an
"encoding" and information compression mechanism -- invoking "Moby Dick"
isn't nine characters long -- that is just algebraic shorthand for
something several MB long that contains cultural referents and words
that further encapsulate realities the baseline description of which are
very large indeed.

> As Spock said in the "everyone is Evil parallel-universe" episode, "I have
> friends, and some of them are LOGICIANS".
>
> Incidentally, I had been, before this thread, sceptical about 64 bit myself.
> The killer app for me was RGB's reminder that it is good to fit an integer
> in a register. I astonished myself that my computer-brain had always
> interpreted 64-bit as the addressable space, subsuming my numerology-brain
> which could care less. So thanks for kicking me out of that prejudice. Also,
> my mind is a bit expanded now about what all the register might like to
> consider as in its addressable range; and that's certainly more than 32
> bit, altho in that sense 64 still seems kinda profligate to me.

It is.  It is really just 48 bits (2^48 bytes, i.e. 256 terabytes of
address space) currently, with transparent scalability up to 64 if and when
somebody comes up with terabyte DIMMs -- maybe in another decade, if
history is any guide.

Now is 48 bits profligate?  Sure.  For now.  But who wants to do "just"
34 bit or 40 bit addressing and then have to bump it out by a few more
bits every 2-3 years?  48 will hold us for maybe a decade.
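
For the record, the arithmetic behind those numbers (a throwaway sketch
in plain C; build with e.g. gcc addr_bits.c -lm):

   /* addr_bits.c -- how much memory each address width actually spans */
   #include <stdio.h>
   #include <math.h>

   int main(void)
   {
       const int widths[] = { 32, 40, 48, 64 };
       const double TB = ldexp(1.0, 40);          /* 2^40 bytes */
       int i;

       for (i = 0; i < 4; i++) {
           double bytes = ldexp(1.0, widths[i]);  /* 2^width bytes */
           printf("%2d address bits -> %14.3f TB\n", widths[i], bytes / TB);
       }
       return 0;
   }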

I personally look forward to the day when I can hold the Library of
Congress (as straight text data) in the operational memory of my then-laptop
and have several times that much memory to spare, along with the massive
hash tables and AI/VR/VR programs that extend the UI to a subcutaneous
embedded chip so that instead of typing I just think the text and it
happens, with the computer automagically fixing my tpyos and grammars'
mistake, with the computer looking up all references and crosslinking
them without being told so that every document I create is a hypertext
document, and so on.  Especially if that "laptop" is actually just one
function of my pen-sized PDD I keep in my pocket that runs on solar
power and hooks into the 10 Gbps wireless Internet.

This is a good link for people who want more real world details about
e.g. Opteron architecture:

   http://chip-architect.com/news/2003_09_21_Detailed_Architecture_of_AMDs_64bit_Core.html

Although Google, invoked from your browser via an immensely complex
"program" that extends literally over the entire world, can doubtless
find you ten thousand more references, some of them better.

     rgb


-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu




