[Beowulf] big read triggers migration and slow memory IO?
mathog
mathog at caltech.edu
Fri Jul 10 12:23:25 PDT 2015
On 10-Jul-2015 12:00, Christopher Samuel quoted:
> A single compaction run involves a migration scanner and a free
> scanner. Both scanners operate on pageblock-sized areas in the zone.
> The migration scanner starts at the bottom of the zone and searches
> for all movable pages within each area, isolating them onto a private
> list called migratelist. The free scanner starts at the top of the
> zone and searches for suitable areas and consumes the free pages
> within making them available for the migration scanner. The pages
> isolated for migration are then migrated to the newly isolated free
> pages.
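To make the two-scanner idea concrete, here is a minimal sketch of the
algorithm as I read that description. This is not the kernel's code;
the page model, the names, and the tiny zone size are my own
simplification:

/*
 * Toy model of a compaction run: a migration scanner walks up from
 * the bottom of a zone collecting movable pages, while a free scanner
 * walks down from the top finding free pages; each movable page is
 * "migrated" to a free slot near the top, and the run ends when the
 * two scanners meet.
 */
#include <stdio.h>

#define ZONE_PAGES 16

enum page_state { PAGE_FREE, PAGE_MOVABLE, PAGE_PINNED };

static void compact_zone(enum page_state zone[ZONE_PAGES])
{
    int migrate = 0;                /* migration scanner: bottom up */
    int free_scan = ZONE_PAGES - 1; /* free scanner: top down       */

    while (migrate < free_scan) {
        /* Isolate the next movable page from the bottom. */
        while (migrate < free_scan && zone[migrate] != PAGE_MOVABLE)
            migrate++;
        /* Find the next free target page from the top. */
        while (migrate < free_scan && zone[free_scan] != PAGE_FREE)
            free_scan--;
        if (migrate >= free_scan)
            break;
        /* Migrate the page; its old slot becomes free. */
        zone[free_scan] = PAGE_MOVABLE;
        zone[migrate] = PAGE_FREE;
    }
}

int main(void)
{
    enum page_state zone[ZONE_PAGES] = {
        PAGE_MOVABLE, PAGE_FREE,    PAGE_PINNED, PAGE_MOVABLE,
        PAGE_FREE,    PAGE_MOVABLE, PAGE_FREE,   PAGE_PINNED,
        PAGE_MOVABLE, PAGE_FREE,    PAGE_FREE,   PAGE_MOVABLE,
        PAGE_FREE,    PAGE_FREE,    PAGE_FREE,   PAGE_FREE,
    };

    compact_zone(zone);
    for (int i = 0; i < ZONE_PAGES; i++)
        printf("%d", zone[i]);  /* 0=free, 1=movable, 2=pinned */
    putchar('\n');
    return 0;
}

The upshot is that movable pages bubble toward the top of the zone and
free pages accumulate at the bottom, so the run only produces
contiguous free space if there are free pages for the targets to land
in.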
I wonder whether they are developing this on a system with a
relatively small amount of memory per zone, say 16 or 32GB, and the
method simply doesn't scale well to 262GB/node.
Also, compaction would seem to have been a lost cause at the times it
was running on my system, because there were very few free pages.
(What it needed to do, but wasn't doing, was throw pages out of the
file cache.) I'm thinking that under those conditions compaction would
work about as well as defragmenting a hard drive that is 99% full.
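For what it's worth, clean pagecache can be forced out by hand through
the documented /proc/sys/vm/drop_caches interface (root only; writing
"1" drops only clean pagecache, and running sync first frees more). A
minimal sketch of doing that from C:

/*
 * Drop clean pagecache pages via /proc/sys/vm/drop_caches.
 * Must run as root; ideally preceded by sync(1)/sync(2).
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (!f) {
        perror("drop_caches");
        return 1;
    }
    fputs("1\n", f);  /* 1 = pagecache, 2 = dentries/inodes, 3 = both */
    fclose(f);
    return 0;
}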
Regards,
David Mathog
mathog at caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech