When hugepages is set vm.max_map_count is not automatically adjusted
Bug #1507921 reported by Liam Young. This bug affects 3 people.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
falkor | Fix Released | High | Chris Glass |
dpdk (Ubuntu) | Fix Released | Medium | Unassigned |
nova-compute (Juju Charms Collection) | Fix Released | High | Liam Young |
openvswitch-dpdk (Ubuntu) | Won't Fix | Undecided | Unassigned |
Bug Description
When hugepages is set, the kernel parameter vm.max_map_count should be at least 2 * vm.nr_hugepages, but it is currently not increased automatically.
This minimum seems to come from https:/
"While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation."
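As an illustration only (not the code from the branches below), a minimal Python sketch of the adjustment the description asks for, assuming root access, the standard /proc/sys layout and the procps sysctl utility:

import subprocess

def read_sysctl(name):
    # Read an integer sysctl value directly from /proc/sys.
    path = '/proc/sys/' + name.replace('.', '/')
    with open(path) as f:
        return int(f.read().strip())

def ensure_max_map_count():
    # Raise vm.max_map_count to at least 2 * vm.nr_hugepages, never lowering it.
    required = 2 * read_sysctl('vm.nr_hugepages')
    if read_sysctl('vm.max_map_count') < required:
        # Runtime change only; persisting it also needs an entry under /etc/sysctl.d.
        subprocess.check_call(
            ['sysctl', '-q', 'vm.max_map_count={}'.format(required)])

if __name__ == '__main__':
    ensure_max_map_count()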
Related branches
lp://staging/~gnuoy/charm-helpers/max_map_count
- James Page: Approve
Diff: 33 lines (+15/-0), 2 files modified:
- charmhelpers/core/hugepage.py (+2/-0)
- tests/core/test_hugepage.py (+13/-0)
lp://staging/~gnuoy/charms/trusty/nova-compute/chsync-stable
Merged into lp://staging/~openstack-charmers-archive/charms/trusty/nova-compute/trunk at revision 141
- Marco Ceppi (community): Approve
- David Ames (community): Approve
Diff: 546 lines (+245/-26), 8 files modified:
- hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+100/-1)
- hooks/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
- hooks/charmhelpers/contrib/openstack/context.py (+10/-9)
- hooks/charmhelpers/contrib/openstack/utils.py (+4/-1)
- hooks/charmhelpers/core/host.py (+12/-1)
- hooks/charmhelpers/core/hugepage.py (+2/-0)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+67/-8)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
lp://staging/~gnuoy/charms/trusty/nova-compute/chsync-next
Merged into lp://staging/~openstack-charmers-archive/charms/trusty/nova-compute/next at revision 177
- David Ames (community): Approve
Diff: 546 lines (+245/-26), 8 files modified:
- hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+100/-1)
- hooks/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
- hooks/charmhelpers/contrib/openstack/context.py (+10/-9)
- hooks/charmhelpers/contrib/openstack/utils.py (+4/-1)
- hooks/charmhelpers/core/host.py (+12/-1)
- hooks/charmhelpers/core/hugepage.py (+2/-0)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+67/-8)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
Changed in nova-compute (Juju Charms Collection):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Liam Young (gnuoy)
description: updated

Changed in nova-compute (Juju Charms Collection):
status: In Progress → Fix Released
milestone: none → 15.10

Changed in falkor:
milestone: none → 0.13
assignee: nobody → Chris Glass (tribaal)
importance: Undecided → High
status: New → Confirmed

Changed in falkor:
status: Confirmed → Fix Committed

Changed in falkor:
status: Fix Committed → Fix Released

Changed in openvswitch-dpdk (Ubuntu):
status: Confirmed → Invalid
status: Invalid → Won't Fix
For openvswitch-dpdk, vm.max_map_count should be raised to at least 2 * nr_hugepages plus some padding for other applications, e.g.:

# Sum nr_hugepages across all NUMA nodes and page sizes; fall back to the
# kernel default (65530) if no hugepage counters are present in sysfs.
max_map_count="$(awk -v padding=65530 '{total+=$1}END{print total*2+padding}' /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages)"
max_map_count=${max_map_count:-65530}
sysctl -q vm.max_map_count="${max_map_count}"
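The padding of 65530 here is the kernel's default vm.max_map_count, so the computed limit keeps the usual headroom for ordinary (non-hugepage) mappings, and the ${max_map_count:-65530} fallback preserves that default when the sysfs hugepage counters are not present at all.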