[Beowulf] NUMA zone weirdness

John Hearns hearnsj at googlemail.com
Fri Dec 16 07:52:34 PST 2016


Problem solved.
I have changed the QPI Snoop Mode on these servers from Cluster-on-Die
Enabled to Disabled, and they now display what I take to be the correct
behaviour, i.e.:

[root at comp006 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11
node 0 size: 32673 MB
node 0 free: 31541 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23
node 1 size: 32768 MB
node 1 free: 31860 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
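
If you want a node health-check to catch this rather than an eyeball, a
minimal libnuma sketch can reprint the distance matrix above (this is just
my illustration, assuming libnuma and its headers are installed; build with
"gcc numa_distances.c -o numa_distances -lnuma"):

    /* numa_distances.c - print the NUMA distance matrix, as numactl does */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
            int i, j, max_node;

            if (numa_available() < 0) {
                    fprintf(stderr, "NUMA is not available here\n");
                    return 1;
            }
            max_node = numa_max_node();  /* highest node id, 1 on this box */
            for (i = 0; i <= max_node; i++) {
                    for (j = 0; j <= max_node; j++)
                            /* numa_distance(): 10 = local, larger = remote */
                            printf("%4d", numa_distance(i, j));
                    printf("\n");
            }
            return 0;
    }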


On 16 December 2016 at 14:51, John Hearns <hearnsj at googlemail.com> wrote:

> hwloc is finding weirdness also.
> I am going to find I have done something stupid, right?
>
> [johnh at comp006 ~]$ lstopo
> ****************************************************************************
> * hwloc 1.11.3 has encountered what looks like an error from the operating system.
> *
> * Package (P#1 cpuset 0x00fff000) intersects with NUMANode (P#0 cpuset 0x00fc0fff) without inclusion!
> * Error occurred in topology.c line 1046
> *
> * The following FAQ entry in the hwloc documentation may help:
> *   What should I do when hwloc reports "operating system" warnings?
> * Otherwise please report this error message to the hwloc user's mailing list,
> * along with the output+tarball generated by the hwloc-gather-topology script.
> ****************************************************************************
> Machine (64GB total)
>   NUMANode L#0 (P#0 32GB)
>     Package L#0
>       L3 L#0 (15MB)
>         L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
>         L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
>         L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
>         L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
>         L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
>         L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
>       L3 L#1 (15MB)
>         L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
>         L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
>         L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
>         L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
>         L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
>         L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
>     L3 L#2 (15MB)
>       L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#18)
>       L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#19)
>       L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#20)
>       L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#21)
>       L2 L#16 (256KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 (P#22)
>       L2 L#17 (256KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 (P#23)
>     HostBridge L#0
>       PCIBridge
>         PCI 8086:24f0
>       PCI 8086:8d62
>       PCIBridge
>         PCI 102b:0522
>           GPU L#0 "card0"
>           GPU L#1 "controlD64"
>       PCIBridge
>         PCI 8086:1521
>           Net L#2 "enp7s0f0"
>         PCI 8086:1521
>           Net L#3 "enp7s0f1"
>       PCI 8086:8d02
>         Block(Disk) L#4 "sda"
>   NUMANode L#1 (P#2 32GB) + L3 L#3 (15MB)
>     L2 L#18 (256KB) + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 (P#12)
>     L2 L#19 (256KB) + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 (P#13)
>     L2 L#20 (256KB) + L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20 + PU L#20 (P#14)
>     L2 L#21 (256KB) + L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21 + PU L#21 (P#15)
>     L2 L#22 (256KB) + L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22 + PU L#22 (P#16)
>     L2 L#23 (256KB) + L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23 + PU L#23 (P#17)
>
>
> On 16 December 2016 at 14:36, John Hearns <hearnsj at googlemail.com> wrote:
>
>> This is in the context of Omni-Path cards and the hfi1 driver.
>> In the file pio.c there is a check that the NUMA zones are online and
>> compactly numbered:
>>
>>     num_numa = num_online_nodes();
>>     /* enforce the expectation that the numas are compact */
>>     for (i = 0; i < num_numa; i++) {
>>             if (!node_online(i)) {
>>                     dd_dev_err(dd, "NUMA nodes are not compact\n");
>>                     ret = -EINVAL;
>>                     goto done;
>>             }
>>     }
>>
>> (drivers/staging/rdma/hfi1/pio.c in kernel 4.4, around line 1711:
>> http://lxr.free-electrons.com/source/drivers/staging/rdma/hfi1/pio.c?v=4.4#L1711)
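>>
>> You can mimic that test from userspace before loading the driver. Here is
>> a rough sketch with libnuma (my illustration, not the driver's code; the
>> file name is mine, build with "gcc numa_compact.c -o numa_compact -lnuma").
>> The driver loops over node ids 0..num_online_nodes()-1, which amounts to
>> asking whether the highest node id equals the node count minus one, so on
>> a machine with nodes (0,2) like the one below it reports "not compact":
>>
>>     /* numa_compact.c - userspace version of the hfi1 compactness check */
>>     #include <stdio.h>
>>     #include <numa.h>
>>
>>     int main(void)
>>     {
>>             int num_numa, max_node;
>>
>>             if (numa_available() < 0) {
>>                     fprintf(stderr, "NUMA is not available here\n");
>>                     return 1;
>>             }
>>             num_numa = numa_num_configured_nodes(); /* number of nodes  */
>>             max_node = numa_max_node();             /* highest node id  */
>>             /* compact means the ids are exactly 0 .. num_numa-1 */
>>             if (max_node + 1 != num_numa) {
>>                     printf("NUMA nodes are not compact (%d nodes, max id %d)\n",
>>                            num_numa, max_node);
>>                     return 1;
>>             }
>>             printf("NUMA nodes are compact (%d nodes)\n", num_numa);
>>             return 0;
>>     }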
>>
>> On some servers I have, I see this weirdness with the NUMA zones
>> (E5-2650 v4 processors, hyperthreading is off):
>>
>> [root at comp006 ~]# numactl --hardware
>> available: 2 nodes (0,2)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 18 19 20 21 22 23
>> node 0 size: 32673 MB
>> node 0 free: 29840 MB
>> node 2 cpus: 12 13 14 15 16 17
>> node 2 size: 32768 MB
>> node 2 free: 31753 MB
>> node distances:
>> node   0   2
>>   0:  10  20
>>   2:  20  10
>>
>> Someone will be along in a minute to explain why.
>>
>> I am sure this is a BIOS setting, but which one is not making itself
>> clear to me.
>>
>