4. The number of OVS threads and memory usage
On my test host there are 48 HT cores seen by the system and the LXD container:
lscpu | grep On-line
On-line CPU(s) list: 0-47
I changed the ovs-vswitchd start command to include `--no-mlockall`, and it started successfully:
ovs-ctl --no-mlockall --no-ovsdb-server --no-monitor start
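A one-off start like this is lost on restart; to persist the flag on Ubuntu/Debian, something like the following should work, assuming the packaging reads OVS_CTL_OPTS from /etc/default/openvswitch-switch (worth verifying against the installed scripts):

# pass --no-mlockall to ovs-ctl on every service start
echo 'OVS_CTL_OPTS="--no-mlockall"' >> /etc/default/openvswitch-switch
systemctl restart openvswitch-switch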
This allowed me to look at the actual number of threads it creates (50):
https://paste.ubuntu.com/p/GbF4DPxz9q/
root@right-imp:~# pstree -p 738 | wc -l
50
root@right-imp:~# pstree -p `pgrep -f ovs-vswitchd`
ovs-vswitchd(738)─┬─{ovs-vswitchd}(740)
                  ├─{ovs-vswitchd}(741)
                  ├─{ovs-vswitchd}(742)
                  ├─{ovs-vswitchd}(785)
                  ├─{ovs-vswitchd}(786)
                  ├─{ovs-vswitchd}(787)
                  ├─{ovs-vswitchd}(788)
                  └─{ovs-vswitchd}(958)
# ...
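As a cross-check that does not depend on pstree's rendering, ps can report the thread count directly (nlwp is the number of threads in the process):

# nlwp = number of threads; "=" suppresses the header line
ps -o nlwp= -p `pgrep -f ovs-vswitchd`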
`top` shows that the virtual address space for ovs-vswitchd is ~3.5 GiB while the resident memory is around 20 MiB:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
738 root 10 -10 3615.0m 20.3m 4.8m S 2.3 0.0 0:22.52 ovs-vswitchd
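Since mlockall() is what ties this VIRT/RES picture to RLIMIT_MEMLOCK, it is worth checking the limit the daemon actually runs with; prlimit from util-linux can do that (assuming pgrep matches only the daemon):

# show the soft and hard RLIMIT_MEMLOCK of the running daemon
prlimit --memlock -p `pgrep -f ovs-vswitchd`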
When I reduce the number of cores allocated to the container:
# lxc config set right-imp limits.cpu 8
root@right-imp:~# lscpu | grep On-line
On-line CPU(s) list: 1,6,10,12,20,24,38,44
pstree -p `pgrep -f ovs-vswitchd` | wc -l
10
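To double-check how vswitchd sized its thread pools for the new CPU set, the daemon can be queried directly; upcall/show reports the handler and revalidator counts (output shape varies by OVS version):

# ask vswitchd how many handler/revalidator threads it created
ovs-appctl upcall/show

# optionally pin the pool sizes regardless of core count
ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=4
ovs-vsctl set Open_vSwitch . other_config:n-revalidator-threads=2

Both other_config knobs are documented in ovs-vswitchd.conf.db(5).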
I can see that the resident memory drops for ovs-vswitchd:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
696 root 10 -10 734.8m 8.3m 4.9m S 0.3 0.0 0:00.37 ovs-vswitchd
rss_diff = 20.3 - 8.3 = 12 MiB
Given that I reduced the number of cores exposed to the container (and vswitchd) by 40, the following can be used to estimate the RSS memory added per core:
rough_rss_per_core = rss_diff / core_diff = 12 MiB / (48 - 8) = 0.3 MiB
So a rough calculation shows that each core adds roughly 0.3 MiB to RSS when OVS is idling.
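A quick sanity check of that arithmetic with bc (the two RES values come from the top samples above):

# (RSS at 48 cores - RSS at 8 cores) / (48 - 8) cores
echo 'scale=2; (20.3 - 8.3) / (48 - 8)' | bc
.30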
--------------------------------
Based on that, I can summarize that the issue may appear:
1) Depending on the number of cores available to the system;
2) Depending on which systemd version is used: versions that default RLIMIT_MEMLOCK to 64 MiB will be less prone to it until vswitchd puts more memory into RSS (new allocations or more cores). A quick way to check the defaults is shown below.
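For completeness, the default systemd applies (and what the unit actually got) can be checked like this on a systemd host (the unit name may differ per distro):

# manager-wide default for RLIMIT_MEMLOCK
systemctl show --property=DefaultLimitMEMLOCK
# the limit actually applied to the vswitchd unit
systemctl show ovs-vswitchd.service --property=LimitMEMLOCK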