[Beowulf] single machine with 500 GB of RAM
Mark Hahn
hahn at mcmaster.ca
Wed Jan 9 21:04:20 PST 2013
> procs. Within each process the accesses to their "cube" of data were
> near to completely random.
"completely random" is a bit like von Neumann's "state of sin" ;)
if they managed to make actually uniform random accesses, they'd have
discovered a new PRNG, possibly the most compute-intensive known!
my guess is that apps that claim to be random/seeky often have
pretty non-uniform patterns. they obviously have a working set,
and the question is really how sharp the knee is in that curve.
the knee matters most when the cache-miss penalty is large, as with ram
vs casually-configured swap on normal disks. what if there's a 1000x
difference in how often the hottest blocks are used vs the cool ones?
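to make the knee concrete, here's a toy calculation (my numbers, purely
illustrative): assume block popularity follows zipf(1), which plenty of
real workloads roughly resemble, and ask what fraction of accesses land
on the hottest blocks:

  /* toy sketch, not from any measured app: under an assumed zipf(1)
     popularity distribution over N blocks, the hottest k blocks get
     H(k)/H(N) of the accesses.  print a few points of that CDF. */
  #include <stdio.h>

  int main(void) {
      const long N = 1000000;           /* assume 1M 4k blocks (~4 GB) */
      double h = 0.0, hN = 0.0;
      long i;
      for (i = 1; i <= N; i++)
          hN += 1.0 / i;                /* harmonic number H(N) */
      for (i = 1; i <= N; i++) {
          h += 1.0 / i;
          if (i == N / 100 || i == N / 10 || i == N / 2)
              printf("hottest %4.1f%% of blocks -> %4.1f%% of accesses\n",
                     100.0 * i / N, 100.0 * h / hN);
      }
      return 0;
  }

under those assumptions the hottest 1% of blocks soaks up about
two-thirds of the traffic - exactly the kind of knee that makes a ram
cache over slow storage workable.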
let's eyeball a typical memory latency at 50 ns, a mediocre disk at 10 ms,
but the real news here is that completely mundane SSD latency is 130 us.
disk's 200,000x penalty vs ram is why thrashing is so painful; the SSD's
2600x is not something you can ignore, but it's not crazy.
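plugging those numbers into the usual average-access-time formula
(my arithmetic, not a measurement):

  /* back-of-envelope: avg access = (1-miss)*ram + miss*backing store,
     using the eyeballed latencies above (50 ns, 130 us, 10 ms). */
  #include <stdio.h>

  int main(void) {
      const double ram = 50e-9, ssd = 130e-6, disk = 10e-3;
      const double miss[] = { 0.0001, 0.001, 0.01, 0.1 };
      int i;
      for (i = 0; i < 4; i++) {
          double m = miss[i];
          printf("miss %5.2f%%: ram+ssd %7.2f us   ram+disk %9.1f us\n",
                 m * 100,
                 ((1 - m) * ram + m * ssd) * 1e6,
                 ((1 - m) * ram + m * disk) * 1e6);
      }
      return 0;
  }

even a 1% miss rate to disk averages ~100 us per access; the same miss
rate to ssd averages ~1.3 us, which plenty of codes could live with.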
it's a curious coincidence that a farm of gigabit-ethernet servers could
serve random 4k blocks at a latency similar to the SSD (say 150 us).
of course, ScaleMP is, abstractly, based on this idea (over IB.)
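a quick sanity check on that coincidence (assumed numbers: wire-speed
gigabit, ~100 us of rtt plus software overhead):

  /* rough estimate: a 4k block over gigabit ethernet is ~33 us of
     serialization; add an assumed ~100 us for rtt + network stack. */
  #include <stdio.h>

  int main(void) {
      const double bits = 4096.0 * 8;    /* one 4k block */
      const double gbe = 1e9;            /* 1 Gb/s wire speed */
      const double overhead = 100e-6;    /* assumed rtt + stack cost */
      double wire = bits / gbe;
      printf("4k over GbE: %.0f us wire + %.0f us overhead = ~%.0f us\n",
             wire * 1e6, overhead * 1e6, (wire + overhead) * 1e6);
      return 0;
  }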
> IOPs, but is there anywhere a scaling study on them where individual
> requests latencies are measured and CDF'd? That would be really
http://markhahn.ca/ssd.png
like this? those are random 4k reads, uniformly distributed.
regards, mark.