[Beowulf] Moore's Law is dying
Peter St. John
peter.st.john at gmail.com
Tue Apr 14 14:31:35 PDT 2009
The size of the final source code for a program is not really relevant to
human comprehension, because we chunk it. FORTRAN makes it possible for the
engineer to comprehend a program that would be inaccessible to him as the
equivalent (very long) list of Assembler statements; how far would you get
with the IMSL if you had to inline every byte at the level of loading
registers? And IDEs generate much more code these days than anyone wants to
read. When you use Visual Basic to whip together a UI, which is fun and
easy, it generates event handler hooks that nobody, really nobody, wants to
read.
During the year that WorldCom tanked I was solely responsible for
maintaining a million lines of legacy stuff: the product of a team of Ivy
PhD's using automation with proprietary libraries. There were hundreds of
makefiles and build scripts; just the build system was almost too big
to comprehend, much less the project itself. Yet one million lines is much
smaller than 2GB :-) But it was comprehensible as chunks, as modules, the way you
comprehend a map of Europe divided into countries, not blades of grass.
Searching that for memory leaks was a bear, though, I can tell you.
Also, a single (recursive) line of code can be incomprehensible in a strong
sense. Consider the short definition of Mandelbrot sets. It might take a
year of Complex Analysis to comprehend the short definition :-) but nobody
can comprehend the result, in a sense. You can't predict what the resulting
image will look like, other than that it will be pretty, confusing, and
self-similar. Recursion is magical.
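To make that concrete: the whole Mandelbrot rule is just "iterate z -> z^2 + c
and see whether z escapes." Here's a toy sketch (my own illustration, not from
the original definition; the bailout of 30 iterations and the grid bounds are
arbitrary choices) that renders a crude ASCII picture from those few lines:

```python
def escapes(c, limit=30):
    """Return True if the orbit of z -> z*z + c escapes to infinity.

    A point is assumed to escape once |z| > 2, a standard bound for
    the Mandelbrot iteration; `limit` caps the number of iterations.
    """
    z = 0j
    for _ in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

# Render the region roughly [-2, 1] x [-1.2, 1.2] as ASCII art:
# '*' marks points that do not escape (inside the set).
for im in range(12, -13, -2):
    row = ""
    for re in range(-40, 21):
        row += " " if escapes(complex(re / 20, im / 10)) else "*"
    print(row)
```

The point of the example is how little code there is: one short recursive rule
produces the whole famously intricate image.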
I just don't think there is anything special about 32-bit addressing.
CAD/CAM-type methods applied to software itself, the von Neumann
equivalence of the code segment and the data segment, not to mention the
potential of AI regenerating itself, all make any specific limit on code size
difficult to imagine, for me.
That said, I can imagine it will be /would be/ maybe already is somewhere,
cool for a many-core design to be hierarchical; if you have a zillion cores
on a chip, imagine 8 8-bit ALUs alongside one 64-bit CPU, for vector
processing of small integers, where each ALU might quickly address its own little
chunk of 256 bytes of cache.
On 4/14/09, Jon Forrest <jlforrest at berkeley.edu> wrote:
> Joe Landman wrote:
>> We know of (and have worked with) many applications that have required
>> tremendous memory footprint. One that required hundreds of GB of ram in the
>> late 90s might use a bit more today.
> I claim that there's a memory-related constant that hasn't been
> widely recognized. This is that the amount of address space for
> a program's text segment will never exceed 32 bits. Note that
> I am *not* talking about the data segment.
> The reason for this is that it's simply too hard to write
> a program whose instructions require even close to the
> 32 bit address space. Such a program would be too complex
> to understand, assuming it's written by humans. Maybe
> such a program could be generated by a program, but
> I'm not talking about this.
> I once added up the text segment of every executable
> and shared library on a Linux system. I probably counted
> some files more than once. Even so, the total text size
> of all these files was less than 2GB.
> I'm not proposing doing anything about this, such
> as coming out with an architecture that uses
> 32-bit text pointers and 64-bit data pointers.
> That would add needless complexity. But, it's important
> to realize that this limit exists, and unless
> we get much smarter, isn't likely to go away.
> Jon Forrest
> Research Computing Support
> College of Chemistry
> 173 Tan Hall
> University of California Berkeley
> Berkeley, CA
> jlforrest at berkeley.edu
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit