[Beowulf] Pony: not yours.

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Sat May 18 16:45:57 PDT 2013

On 5/17/13 7:16 AM, "Ellis H. Wilson III" <ellis at cse.psu.edu> wrote:

>On 05/17/2013 10:01 AM, Joe Landman wrote:
>> That said, putting 192 cores in 48 compute nodes, along with 1/4 PB of
>> storage in a 4U rack mount container is pretty darned awesome.  And the
>> CPUs will get faster and more efficient over time, so the HPC comment
>> likely has an expiration date on it.
>> Add to this, they run Ubuntu, Debian, and Fedora.  Easy tool stack, most
>> stuff just works.  FP heavy and memory intensive code ... not so well.
>> Integer heavy code, pretty well.
>Playing devil's advocate here, but am honestly interested to know:
>Isn't there a tacit expectation that if you are moving tons of data
>really fast, it will be "memory intensive"?  Are you not moving that
>data from disk into memory first, and then doing a bit (or a lot) of
>work on it and dumping it back out to disk or dumping it completely?
>Maybe I'm screwing up the definition of "memory intensive" though...
>(Does memory-bound == memory-intensive?  I think no, but could be wrong.)

Memory intensive could mean "lots of memory accesses to a large memory
space," perhaps with no structure or pattern to them.
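A minimal sketch of that distinction (not from the original post; the function names and sizes are illustrative): both loops below touch every element once, but one walks memory in unit stride while the other follows a data-dependent permutation, which is the "no structure or pattern" case that defeats caches and prefetchers.

```c
#include <assert.h>
#include <stddef.h>

/* Unit-stride traversal: the hardware prefetcher sees the pattern. */
long sum_sequential(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Permuted traversal: same arithmetic, but each address depends on
 * perm[i], so accesses look random to the memory system. */
long sum_permuted(const long *a, const size_t *perm, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[perm[i]];
    return s;
}
```

On a working set much larger than cache, the second loop can be several times slower despite doing identical work, which is roughly what "memory intensive without structure" costs you.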

An FFT, or linear algebra on dense matrices, has a lot of structure in
its memory accesses, even though it may hit memory a lot. Clever caching
or register use can make a huge difference in algorithms like FFTs,
particularly for short transforms where all the data fit in cache.
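A sketch of the cache-blocking idea being described, using a matrix transpose rather than an FFT for brevity (the function and block size are my own, not from the post): instead of streaming across whole rows, the loop works on small tiles that fit in cache, so each cache line is reused before it is evicted.

```c
#include <assert.h>

/* Blocked transpose of an n x n row-major matrix.
 * Assumes b evenly divides n; real codes handle the remainder and
 * tune b to the cache size. */
void transpose_blocked(const double *src, double *dst, int n, int b) {
    for (int ii = 0; ii < n; ii += b)
        for (int jj = 0; jj < n; jj += b)
            /* Inner loops stay inside one b x b tile of src and dst,
             * so both tiles remain cache-resident while in use. */
            for (int i = ii; i < ii + b; i++)
                for (int j = jj; j < jj + b; j++)
                    dst[j * n + i] = src[i * n + j];
}
```

The same tiling trick is what dense linear-algebra libraries and FFT codes do to exploit the structure in their access patterns; once the whole problem fits in cache (the short-transform case above), the blocking becomes unnecessary.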

