[Beowulf] partitioning L3 by page coloring

Josef Weidendorfer Josef.Weidendorfer at in.tum.de
Mon Apr 8 14:58:18 PDT 2013


On 08.04.2013 22:17, Brice Goglin wrote:
> Also, I wonder if huge pages could help enforce the color. These pages
> are contiguous in physical memory and well aligned, so you know a lot
> about the physical addresses inside them. If the cache coloring uses
> some of those known bits, it may be possible to manually allocate with
> a specific color within the huge page?

Yes, that should work, but it's tricky.

Suppose we choose a coloring that requires bit 14 of every used address
to be 0, i.e. only contiguous 16 kB ranges (the lower half of each
32 kB window) are usable. You can provide your own malloc on top of
mmap. However, you then have to make sure that every allocation your
application makes is smaller than 16 kB, so that it fits inside one
such range.

It would not really work for code and static data, but that probably
does not matter much.
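
Roughly, such an allocator could look like the sketch below. This is
purely illustrative and untested; it assumes a 2 MB huge page obtained
with MAP_HUGETLB, the bit-14 coloring from above, and made-up names:

#define _GNU_SOURCE            /* for MAP_ANONYMOUS / MAP_HUGETLB */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define HUGE_SIZE (2UL * 1024 * 1024)   /* one 2 MB huge page */
#define COLOR_BIT (1UL << 14)           /* address bit that must stay 0 */

static uintptr_t pool_base, pool_cur;

/* Grab one huge page. Assumes hugetlbfs pages have been reserved,
 * e.g. via /proc/sys/vm/nr_hugepages. */
int color_pool_init(void)
{
    void *p = mmap(NULL, HUGE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    /* A 2 MB huge page is 2 MB aligned, and within it virtual bits
     * 0..20 equal the physical bits, so bit 14 of every physical
     * address is fully under our control. */
    pool_base = pool_cur = (uintptr_t)p;
    return 0;
}

/* Return size bytes in which every address has bit 14 == 0, or NULL.
 * size must be smaller than 16 kB, as explained above. */
void *color_alloc(size_t size)
{
    if (size == 0 || size >= COLOR_BIT)
        return NULL;
    /* If this allocation would touch a bit-14==1 half, skip ahead
     * to the start of the next 32 kB window. */
    if ((pool_cur & COLOR_BIT) || ((pool_cur + size - 1) & COLOR_BIT))
        pool_cur = (pool_cur + 2 * COLOR_BIT) & ~(2 * COLOR_BIT - 1);
    if (pool_cur + size > pool_base + HUGE_SIZE)
        return NULL;                    /* pool exhausted */
    void *ret = (void *)pool_cur;
    pool_cur += size;
    return ret;
}

A real allocator would of course need free() and more than one huge
page; the sketch only shows how the known alignment of a huge page lets
userspace control a physical address bit.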

Josef

>
> Brice
>
> On 08/04/2013 16:28, Rayson Ho wrote:
>> I don't think it can be done without changing the kernel page
>> allocator. Virtual-to-physical page mapping is handled entirely by the
>> kernel; in the end, page faults are transparent to userspace.
>>
>> Even traditional page coloring needs help from the kernel. There is
>> the "Compiler-directed page coloring for multiprocessors" work by
>> Todd C. Mowry (another U of Toronto professor), but I don't recall
>> seeing any pure userspace page coloring technique, and I would imagine
>> one is not entirely possible, since userspace does not know the
>> physical addresses of its pages.
>>
>> http://dl.acm.org/citation.cfm?id=237195
>>
>> Rayson
>>
>> ==================================================
>> Open Grid Scheduler - The Official Open Source Grid Engine
>> http://gridscheduler.sourceforge.net/
>>
>> On Mon, Apr 8, 2013 at 12:31 PM, Max R. Dechantsreiter
>> <max at performancejones.com> wrote:
>>> Rayson,
>>>
>>> ...In the paper you cited, I found that the authors modified the
>>> Linux kernel page allocator. That approach seems far beyond what is
>>> practical for, or available to, a user who is not very sophisticated
>>> about kernel internals.  (Also, it must be hard to get right!)
>>>
>>> I am not looking for the absolute best solution, if one exists; just
>>> a "quick and dirty" scheme I could use to test for benefit.
>>>
>>>
>>> Regards,
>>>
>>> Max
>>> ---
>>>
>>> On Mon, 8 Apr 2013, Rayson Ho wrote:
>>>
>>>> That technique was used in the PhD thesis research and projects of
>>>> some of my U of Toronto friends:
>>>>
>>>> "Managing Shared L2 Caches on Multicore Systems in Software"
>>>>
>>>> "Reducing the Harmful Effects of Last-Level Cache Polluters with an
>>>> OS-Level, Software-Only Pollute Buffer"
>>>>
>>>> http://www.eecg.toronto.edu/~tamda/
>>>>
>>>> IIRC, all of those techniques are OS-only, with no changes to the CPU
>>>> MMU or the cache-mapping logic.
>>>>
>>>> Rayson
>>>>
>>>> On Sat, Apr 6, 2013 at 1:54 PM, Max R. Dechantsreiter
>>>> <max at performancejones.com> wrote:
>>>>>
>>>>> Would anyone with successful experience using this technique be willing
>>>>> to share details, and warn of pitfalls?


-- 
Dr. Josef Weidendorfer, Informatik, Technische Universität München
TUM I-10 - FMI 01.06.055 - Tel. 089 / 289-18454


