[Beowulf] Moore's Law is dying

Jon Forrest jlforrest at berkeley.edu
Tue Apr 14 13:27:52 PDT 2009

Robert G. Brown wrote:

> Are you
> suggesting that e.g. a long-running program with fully unrolled loops
> cannot exceed 4 GB in size and still be "simple"?

Unrolled loops probably add only a few percent to the text size
of a program. I admittedly don't have data to prove this, but
try to imagine a case in which a standard compiler would do
enough loop unrolling to add significant size to a program.
As I understand it, loop unrolling is applied only in certain
cases, and the unrolled loop bodies themselves can't be too
large; otherwise any benefit from the unrolling evaporates,
since the code no longer fits in the instruction cache.

> Are you suggesting
> that compilers will never try to unroll code at that level, even when
> enormous memory systems are commonplace?

Again, the enormous memory systems you mention consist mostly
of enormous amounts of data, not text.

> Are you suggesting that even
> when concatenated, the space of all possibly functional operational
> phonemes in computational semantics cannot fill a 4 GB dictionary?

I'm not sure what you mean by "functional operational
phonemes," but to me that sounds like some sort of data,
which, again, is not what I'm talking about.

> Another such program is "the operating system" especially a multitasking
> operating system.  There is no real bound on the number of threads an
> operating system can run,

True, but somebody still has to write the threads.

> and "the program" being run on a multitasking
> operating system is the union of all "sub" programs being run on the
> system, with or without shared libraries (sharing is expensive in
> performance, remember -- we do it to save memory because it is a scarce
> resource).

Why is sharing expensive in performance? It might take a little
overhead to set up and manage, but why is having multiple virtual
addresses map to the same physical memory expensive?

> Clearly that can and does exceed 4 GB, even routinely on a
> heavily loaded server and we'd do it a lot more often without shared
> libraries.

Really? Show me one case where this is true. Again, remember, I'm
only talking about program text.

> And there MAY be new compilers that are a lot more
> generous in their usage of space than they are now.  There may be
> new-gen RISC-y processors that use far more instructions to do things
> that are currently done with fewer ones.  Is your observation
> Intel-arch-only?

I ran my test on both Alpha OSF/1 (a while back) and on
modern Intel x86.

Jon Forrest
Research Computing Support
College of Chemistry
173 Tan Hall
University of California Berkeley
Berkeley, CA
jlforrest at berkeley.edu
