[Beowulf] Win64 Clusters!!!!!!!!!!!!

Robert G. Brown rgb at phy.duke.edu
Wed Apr 11 00:07:16 PDT 2007


On Sun, 8 Apr 2007, Jon Forrest wrote:

> The tests I made myself were non-HPC, e.g. building large software
> packages. But, I'm a reasonable person and I'll be glad to
> modify my statement to say that 64-bit computing is oversold
> in non-HPC markets. For example, when you look at pretty much
> any AMD-based computer these days, and compare it to what
> was available ~2 years ago (I'm not sure of the exact date), what
> difference do you see on the front panel? You'll see "AMD Athlon"
> in both cases, but now you also see "64". On the majority
> of computers being sold, this makes no difference. (HPC users
> are different). I bet most people think that since 64 is bigger
> than 32 then a 64-bit computer is "better". Yet, this isn't the

The primary difference is that a 64-bit computer is FASTER.  Measurably,
and on a remarkably wide range of applications.  True, most of them
don't matter much on a desktop, especially a desktop running nothing but
a web browser or email, but if the user is doing ANY task with a
significant CPU-bound computational component or a fairly wide class of
memory-bound operations, those tasks will complete faster, with less
"stress" on the CPU (improving multitasking and interactive latency by
virtue of completing those little chorelets faster).  Note well that on
really bleeding edge operating systems (e.g. Vista) the speed is
required just to make the base operating system and windowing
environment usable, according to what I hear.

The ability to address much larger memory segments is another major
difference, and it matters to many people as I pointed out before.  Just
because you personally have only rarely or even never observed a system
with 4 GB installed "fill up" its available memory doesn't mean that it
has never happened or cannot happen -- it happens
all the time in the HPC community.  Forget large lattices of spins --
think about a SMALL set of only (say) 100 two level atoms in a fully
quantum entangled state.  There are 2^100 degrees of freedom in such a
system.  An entire 64 bit memory space isn't ENOUGH to hold its
description.
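
To put a number on that (my own back-of-the-envelope sketch, assuming
one complex double amplitude -- 16 bytes -- per basis state; the
figures are not part of the original argument):

  /* Back-of-the-envelope comparison: the state vector of 100 fully
   * entangled two-level atoms (2^100 amplitudes, 16 bytes apiece,
   * assumed) versus the entire 64-bit address space. */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double amplitudes  = pow(2.0, 100.0);    /* ~1.3 x 10^30 basis states */
      double state_bytes = 16.0 * amplitudes;  /* ~2 x 10^31 bytes          */
      double addr_bytes  = pow(2.0, 64.0);     /* ~1.8 x 10^19 bytes        */

      printf("state vector : %.3g bytes\n", state_bytes);
      printf("64-bit space : %.3g bytes\n", addr_bytes);
      printf("shortfall    : a factor of ~%.3g\n", state_bytes / addr_bytes);
      return 0;
  }

Link it against libm and the shortfall comes out around twelve orders
of magnitude -- and that's before you store anything but the state
vector itself.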

You seem to be differentiating between 64 bits of address space just for
the actual programs and 64 bit architectures in general.  This doesn't
make much sense to me.  Modern computers more often than not use a flat
memory model (even where segmentation or virtualization of memory spaces
is supported) and just what any given piece of memory contains is up to
the operating system (and possibly BIOS) -- it isn't a feature of the
processor itself, with some memory labelled "data" and other memory
labelled "code" (unless the processor is a new, 64 bit processor that
lets you tag memory pages in precisely that way).  Program data is often
intertwined with code, and it is easy (at least in C) to write code that
will do horrible things to your program code by overwriting it.  This of
course is the origin of buffer-overwrite attacks and much hair pulling
over deep bugs.
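
As a minimal sketch of the mechanism (my own toy example, not anything
from this thread), here is the kind of C that tramples whatever the
compiler happened to place next to an undersized buffer -- exactly the
seed of a buffer-overwrite attack:

  /* Toy buffer overwrite: strcpy() blindly writes 29 bytes (including
   * the terminating NUL) into an 8-byte buffer.  What actually gets
   * clobbered depends on how the compiler lays out the stack, but the
   * overrun itself -- undefined behaviour -- is guaranteed. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char secret[8] = "safe";
      char buf[8];

      strcpy(buf, "way too long for this buffer");

      printf("secret is now: \"%s\"\n", secret);
      return 0;
  }

An attacker who controls the overflowing string controls what lands in
the adjacent memory; the no-execute page tagging on the newer 64 bit
processors (see the list below) exists precisely to keep an overrun
like this from being turned into executable payload.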

So the assertion that "programs will never be written that use more than
32 bits of code address space" (an assertion that will almost certainly
be proven false by time, barring the landing of an asteroid or the like
that wipes out all of humankind in the next five or ten years) is not a
good argument against "64 bit processors".  There is a law out there
somewhere that says something like "programs expand to fill the
available space" on any given architecture, and I suspect that it is a
correct one.  Indeed, Windows Vista seems hell-bent on proving that
programs are currently expanding FASTER than the available (affordable)
space.

It also ignores the many performance and security benefits of e.g.

   * having wider registers

   * more registers

   * direct hardware support for e.g. 64 bit integers (see the short C
     sketch below)

   * a larger virtual memory space

   * a larger physical address space

   * no-execute bits to tag data memory vs code memory pages to prevent
     buffer overwrite attacks (see above).

   * the ability to perform certain kinds of code virtualization

all at a more or less constant cost.  This isn't about "marketing" at
all, as the Athlon 64 is dirt cheap at <$100 retail, $170 for dual core
-- compared to a whopping $50 for their regular (32 bit) processor.  I
see no massive margins there, given that the older processor has long
since played out its R&D costs and is doubtless end-of-lifed.  In as
little as one more year, we may see the last 32 bit processors go away
forever, like their 8 and 16 bit forefathers.
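
To make the 64 bit integer point concrete, here is a trivial sketch --
nothing clever, just ordinary C -- where every multiply and add on a 64
bit CPU is a single native instruction, while a 32 bit CPU has to
synthesize each one from several 32 bit operations with carry
propagation:

  /* Plain 64-bit integer arithmetic: native single instructions on an
   * Opteron/Athlon 64, multi-instruction sequences on a 32-bit CPU. */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t sum = 0;
      for (uint64_t i = 1; i <= 1000000ULL; i++)
          sum += i * i;             /* full 64-bit multiply and add */
      printf("sum of squares: %llu\n", (unsigned long long) sum);
      return 0;
  }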

It is about performance and competitive market advantage.  The Opteron
simply blows away any 32 bit processor on the market in ways that have a
direct impact on overall system performance (however ignorable those
improvements are for certain classes of task that already leave a system
"mostly idle").  So does the dirt-cheap Athlon 64.

> case for them, especially if they're using a modern version of
> Windows, which is what the original posting was about. These days you

Modern as in Vista?  I thought Vista was supposed to be trans-demanding
on hardware -- so much so that it is a complete pig on older hardware.
It openly calls for a 64 bit CPU in its hardware planning documentation,
although it grudgingly allows that it might, if it feels like it, run on
older 32 bit CPUs, if you don't mind forgoing the "premium experience"
(that is, its desktop interface won't work or will run like treacle).

> also see "X2" which is a different kettle of fish and is, if anything,
> being undermarketed.
>
>>> add the additional difficulty of getting 64-bit drivers
>>> and what-not, I don't think it's worth messing with 64-bit
>>> computing for apps that don't need the address space.
>> 
>> Which OS are you using?  We haven't had 64 bit driver availability
>> issues since late 2004, for Linux.  For windows this may be different.
>
> I agree 100%. I should have been clearer.

I should note that modern 64-bit CPUs have been very deliberately
designed so that one doesn't ever "mess with 64-bit computing" at all,
unless you are a compiler designer or OS programmer or maybe in a few
cases a writer/maintainer of a library.  You boot the system.  It runs.
You load an application.  It runs.  You compile an application from
source.  It runs (or not, but it probably has nothing to do with the
nature of the CPU if it doesn't).  What exactly has to be "messed with"?
The 64 bit CPUs will execute 32 bit programs -- the hardest single thing
associated with making this happen transparently to the user is to
ensure that there are 32 bit (DL) libraries available to those
dynamically linked programs, requiring that a lot of libraries have to
be installed twice, once for 32 bit programs and once for 64 bit
programs.  But a user shouldn't have to know much about this.  Linux
does it almost completely transparently, and so does (in my own
experience) Windows.  Even a programmer doesn't have to know much about
it -- to "port" my personal code from "32 bit" sources in an NFS-mounted
directory to a 64 bit Opteron the first time was something like

  cd programsrc
  make clean
  make

and then run the program.  Whoa, it works.  So smoothly it is hard to
TELL that it is running as a 64 bit application, except by looking at
the libraries it links to and noting that it runs twice as fast.
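
If you really want to convince yourself, a couple of lines of C (my own
trivial check, nothing special) will do it -- on an LP64 Linux system
pointers and longs come back as 8 bytes, where the same source built 32
bit reports 4:

  /* Quick sanity check: LP64 (64-bit Linux) gives 8-byte pointers and
   * longs; a 32-bit build of the same source reports 4 bytes. */
  #include <stdio.h>

  int main(void)
  {
      printf("sizeof(void *) = %zu\n", sizeof(void *));
      printf("sizeof(long)   = %zu\n", sizeof(long));
      return 0;
  }

Running ldd on the binary tells the same story from the outside, via
the libraries it pulls in.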

>>> One additional way 64-bit computing is being oversold
>>> is that there aren't now, and maybe never will be, any
>>> human written program that requires more than 32 bits
>>> for the instruction segment of the program. It's simply
>> 
>> This is a bold assertion.  Sort of like the "no program will ever use
>> more than 640k of memory" made by a computing luminary many moons ago.
>
> Bill Gates says he never said that. In any case, most of that was
> due to the architectural inferiority of the x86 at the time.
> What I'm talking about is a real limit in the complexity of
> what a human, or group of humans, can create. Please name a
> piece of software, free or commercial, that needs more
> than a 32-bit address space for its instruction space.
> As far as I know, there isn't any such thing. Not even close.

And you may be correct.  Or not -- recall not all code is written by
humans, and programs writing programs may well have exceeded your limits
already in some computer science or mathematics department.  Also
remember that groups of humans have systematically filled every
currently available amount of memory within a matter of years for
decades now, so regularly that one can probably predict fairly
accurately the DATE when this will no longer be true by simply extrapolating
a regular curve.  Finally, remember that complexity is a funny thing --
there are whole CLASSES of application that have yet to be written,
including programs that e.g. explore various branches of mathematics via
exhaustive search or do voice/pattern recognition that may well take a
lot more lines of code than you currently anticipate.

But I won't argue, as I don't have a specific counterexample and don't
feel like writing a piece of perl that could generate C code in a matter
of minutes that would, if compiled, exceed a 32 bit CPU's capacity.  For
one thing, I personally don't have an 8 GB 64 bit system to test the
code on.  For another, that doesn't matter because that's not the point
of 64 bit CPUs.  The point is to let computers operate on much larger
memory spaces, regardless of whether the contents of that memory are
code, heap, stack, data, cache, buffer, or any mix of the above being
overwritten with something unexpected by an errant pointer.  The point
is to move data around in bigger chunks, faster.  The point is to
perform all sorts of arithmetical and logical operations on 64 bit
operands faster.  The point is to go faster, at what is ultimately
constant cost to the consumer, just like all the other Moore's law
advances over the last forty or so years.

> If so, I don't see it. If my statement is true, that is that
> no human written software program will ever outgrow
> a 32-bit address space **for text** then that says something
> important about what's important and what's not important.
> Note that I'm not saying that a modern processor should
> be some kind of hybrid mutant with 32-bit text pointers
> and 64-bit data pointers. That would be ridiculous.

Given the conditional, this of course is impossible to argue with;-)
Either point.

If your statement is true (and conceding that it might still BE true, or
nearly so) that truth would be interesting information.  Either way, a
"hybrid mutant" isn't ridiculous -- it is a standard feature in modern
CPU design already.  The Opteron, for example, can only address 48 bits'
worth of memory, not 64, because nobody can imagine how to put EVEN ~2.8
x 10^14 bytes of memory on a single CPU's memory pathway at the moment.
This is just as "ridiculous" for data as 32 bits is for code -- more so,
actually, as it INCLUDES code and data symmetrically in a flat memory
model.  It is just better to engineer for far more than we'll need for a
decade or so and then grow into our hundred terabyte systems with room
to spare rather than spend a lot of money on a processor design only to
have to change it in two or three years because "nobody could imagine"
how a program could need more than (fill in the blank with any resource
assertion of the past)...;-)

    rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu




