[Beowulf] memory bandwidth scaling
Jason Riedy
jason at lovesgoodfood.com
Thu Oct 8 12:02:43 PDT 2015
mathog at caltech.edu writes:
> Lately I have been working on a system with >512 GB of RAM and a
> lot of processors. This wouldn't be at all a cost-effective
> beowulf node, [...]
Depends on the actual machine cost and your applications' needs.
I've seen 1 TiB, 4-CPU machines that cost less than two 256 GiB,
2-CPU machines.
And the savings in programming time shouldn't be overlooked...
> This machine is also prone to locking up (to the point that it doesn't
> answer terminal keystrokes from a remote X11 terminal) when writing
> huge files back to disk. I have not tracked this one down yet; it
> seems to be related to unmapping a memory-mapped 10.5 GB file.
Besides finding the right cache-sizing knobs, you could copy the
files to a RAM disk, mmap those, and copy them back (if that fits
your use case). That hasn't caused any freezing for me on larger
machines and files, and mmap appears to use the RAM disk pages
directly rather than making a second copy.
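A minimal sketch of the staging idea above, assuming a Linux box with a
tmpfs mount such as /dev/shm (all paths and sysctl values here are
illustrative assumptions, not from the original post):

```shell
# Writeback knobs ("cache-sizing knobs") that can reduce stalls when
# flushing huge files -- commented out since they need root, values illustrative:
# sysctl -w vm.dirty_background_bytes=$((256*1024*1024))
# sysctl -w vm.dirty_bytes=$((1024*1024*1024))

# Pick a RAM-backed directory; fall back to /tmp if /dev/shm is absent.
RAMDIR=/dev/shm
[ -d "$RAMDIR" ] || RAMDIR=/tmp

BIG=$(mktemp)                      # stand-in for the huge on-disk input
printf 'payload\n' > "$BIG"

RAM="$RAMDIR/$(basename "$BIG")"   # staging copy held in RAM
cp "$BIG" "$RAM"                   # stage into tmpfs

# ... the application would mmap "$RAM" here instead of the on-disk file;
# unmapping a tmpfs mapping has no dirty file-backed pages to flush ...

cp "$RAM" "$BIG.out"               # write results back to durable storage
rm -f "$RAM"                       # release the RAM
```

Since tmpfs pages live in the page cache, mmap of the staged copy maps
those pages directly, which matches the "no second copy" behavior
described above; the only extra copies are the explicit cp steps.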