[Beowulf] partitioning L3 by page coloring

Rayson Ho raysonlogin at gmail.com
Mon Apr 8 14:38:15 PDT 2013


On Mon, Apr 8, 2013 at 4:17 PM, Brice Goglin <brice.goglin at gmail.com> wrote:
> /proc/<pid>/pagemap can give you some information about physical pages
> if I remember correctly.

The only caveat is that the virtual-to-physical mapping can change -
i.e., the kernel can swap a page out to disk and then fault it back
into a different physical frame when it is next accessed.

Unless, of course, we lock all the pages of each application running
on the same processor into memory (e.g. with mlock()).
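
Something like this rough sketch (untested, error handling omitted;
note that recent kernels hide the PFN from unprivileged users, so
reading it may require root):

    /* Pin one page with mlock() so its frame can't move, then read its
     * physical frame number (PFN) from /proc/self/pagemap. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>

    int main(void)
    {
        long psz = sysconf(_SC_PAGESIZE);
        void *buf;
        posix_memalign(&buf, psz, psz);   /* one page, page-aligned */
        ((char *)buf)[0] = 1;             /* touch it so it is resident */
        mlock(buf, psz);                  /* pin: no swap, PFN stays fixed */

        int fd = open("/proc/self/pagemap", O_RDONLY);
        uint64_t entry;                   /* one 64-bit entry per page */
        pread(fd, &entry, sizeof entry,
              ((uintptr_t)buf / psz) * sizeof entry);

        if (entry & (1ULL << 63))         /* bit 63: page present in RAM */
            printf("PFN = 0x%llx\n",      /* bits 0-54 hold the PFN */
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        close(fd);
        return 0;
    }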

Rayson

==================================================
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/


>
> Also, I wonder if huge pages could help enforce the color. These pages
> are contiguous in physical memory and well aligned, so you know a lot
> about the physical addresses inside them. If cache coloring uses some of
> those known bits, it may be possible to manually allocate with a
> specific color within the huge page?
>
> Brice
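
A sketch of what Brice describes might look like this (assuming 2 MB
huge pages and, purely for illustration, that the L3 color comes from
physical address bits 12-16 - the real bit range depends on the cache
geometry; MAP_HUGETLB also needs hugepages reserved via
/proc/sys/vm/nr_hugepages):

    /* Within a 2 MB-aligned, physically contiguous huge page, physical
     * bits 0-20 equal the virtual offset bits 0-20, so 4 KB chunks of a
     * known color can be picked out without any kernel help. */
    #include <stdio.h>
    #include <sys/mman.h>

    #define HUGE_SZ  (2UL << 20)          /* 2 MB huge page */
    #define PAGE_SZ  (4UL << 10)
    #define NCOLORS  32                   /* assumed: color = bits 12-16 */

    static void *alloc_colored(void *huge, int color, int n)
    {
        /* return the n-th 4 KB chunk of the requested color */
        for (unsigned long off = 0; off < HUGE_SZ; off += PAGE_SZ) {
            int c = (off >> 12) & (NCOLORS - 1);  /* low 21 bits are physical */
            if (c == color && n-- == 0)
                return (char *)huge + off;
        }
        return NULL;
    }

    int main(void)
    {
        void *huge = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (huge == MAP_FAILED) { perror("mmap"); return 1; }

        void *p = alloc_colored(huge, 5, 0);  /* first chunk of color 5 */
        printf("color-5 chunk at %p\n", p);
        return 0;
    }

With those numbers each huge page holds HUGE_SZ / PAGE_SZ / NCOLORS =
16 chunks of every color, so an allocator could hand them out
per-application.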
>
>
>
>
>
> On 08/04/2013 16:28, Rayson Ho wrote:
>> I don't think it can be done without changing the kernel page
>> allocator. Physical/virtual page mapping is all done by the kernel -
>> in the end, page faults are transparent to userspace.
>>
>> Even traditional page coloring needs help from the kernel. There's
>> the "Compiler-directed page coloring for multiprocessors" work done by
>> Todd C. Mowry (another U of Toronto prof), but I don't recall seeing
>> any pure userspace page coloring techniques - and I would imagine it
>> is not entirely possible, as userspace doesn't know the physical
>> page addresses.
>>
>> http://dl.acm.org/citation.cfm?id=237195
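
To make the obstacle concrete: a page's color is a function of its
physical frame number, which userspace never sees. With assumed cache
parameters (8 MB 16-way L3, 64-byte lines, 4 KB pages):

    /* Color arithmetic: the color bits are the set-index bits that lie
     * above the 4 KB page offset - all physical-address bits. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long cache = 8UL << 20, ways = 16, line = 64, page = 4096;
        unsigned long sets   = cache / (ways * line);   /* 8192 sets  */
        unsigned long colors = sets * line / page;      /* 128 colors */
        unsigned long pfn    = 0x12345;  /* only the kernel knows this */
        printf("colors=%lu, color(pfn)=%lu\n", colors, pfn % colors);
        return 0;
    }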
>>
>> Rayson
>>
>> ==================================================
>> Open Grid Scheduler - The Official Open Source Grid Engine
>> http://gridscheduler.sourceforge.net/
>>
>>
>>
>> On Mon, Apr 8, 2013 at 12:31 PM, Max R. Dechantsreiter
>> <max at performancejones.com> wrote:
>>> Rayson,
>>>
>>> ...In the paper you cited, I found the authors modified the
>>> Linux kernel page allocator: that approach seems far beyond what
>>> would be practical for, or available to, a user not very
>>> sophisticated about kernel internals.  (Also, this must be
>>> hard to get right!)
>>>
>>> I am not looking for the absolute best solution, if such
>>> exists; just a "quick and dirty" scheme I could use to test
>>> for benefit.
>>>
>>>
>>> Regards,
>>>
>>> Max
>>> ---
>>>
>>> On Mon, 8 Apr 2013, Rayson Ho wrote:
>>>
>>>> That technique was used in some of my U of Toronto friends' PhD thesis
>>>> research & projects:
>>>>
>>>> "Managing Shared L2 Caches on Multicore Systems in Software"
>>>>
>>>> "Reducing the Harmful Effects of Last-Level Cache Polluters with an
>>>> OS-Level, Software-Only Pollute Buffer"
>>>>
>>>> http://www.eecg.toronto.edu/~tamda/
>>>>
>>>> IIRC, all those techniques are OS-only, with no changes to the CPU MMU
>>>> or cache mapping logic.
>>>>
>>>> Rayson
>>>>
>>>> ==================================================
>>>> Open Grid Scheduler - The Official Open Source Grid Engine
>>>> http://gridscheduler.sourceforge.net/
>>>>
>>>>
>>>>
>>>> On Sat, Apr 6, 2013 at 1:54 PM, Max R. Dechantsreiter
>>>> <max at performancejones.com> wrote:
>>>>>
>>>>> Would anyone with successful experience using this technique be willing
>>>>> to share details, and warn of pitfalls?
>>>>
>


