Simultaneous allocation of 1 GB and 2 MB HugePages breaks VM launching
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Fix Committed | High | Sergey Kolekonov | |
| Mitaka | Fix Released | High | Ivan Berezovskiy | |
| Newton | Fix Committed | High | Sergey Kolekonov | |
Bug Description
ISO 9.0 build 150 - version: http://
Steps to reproduce:
Prepare a 2+1 node environment (two cinder+compute nodes and one controller):
[root@fuel ~]# fuel node --env 2
id | status | name             | cluster | ip        | mac               | roles           | pending_roles | online | group_id
---|--------|------------------|---------|-----------|-------------------|-----------------|---------------|--------|---------
5  | ready  | Untitled (58:06) | 2       | 10.20.0.6 | 0c:c4:7a:6c:58:06 | cinder, compute |               | True   | 2
2  | ready  | Untitled (55:2c) | 2       | 10.20.0.5 | 0c:c4:7a:34:55:2c | cinder, compute |               | True   | 2
6  | ready  | Untitled (53:8e) | 2       | 10.20.0.4 | 0c:c4:7a:34:53:8e | controller      |               | True   | 2
There are 4 NUMA nodes on compute node-2. Allocate Nova huge pages there: 2030 pages of 2 MB and 10 pages of 1 GB
Deploy the environment
Verify huge pages are present on node-2
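For reference, one way to verify the allocation through standard kernel interfaces (these commands are assumed, not taken from the original report):

grep Huge /proc/meminfo
# per-NUMA-node counts; node-2 should show 4 NUMA nodes
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages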
Create an aggregate for compute nodes with huge pages and CPU pinning:
nova aggregate-create performance_3_cpu
# the metadata key/value pairs are truncated in the original report;
# "pinned" and "hpgs" are assumed here and used consistently below
nova aggregate-set-metadata performance_3_cpu pinned=true
nova aggregate-set-metadata performance_3_cpu hpgs=true
Add the compute host to the aggregate:
nova aggregate-add-host performance_3_cpu node-2.domain.tld
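Optionally, confirm the hosts and metadata (check not in the original report):

nova aggregate-details performance_3_cpu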
Create new flavors for huge-page-backed instances, one per huge page size available on the hosts:
nova flavor-create h1.huge.hpgs auto 512 1 1
nova flavor-create h1.small.hpgs auto 1024 1 1
nova flavor-key h1.huge.hpgs set hw:mem_page_size=1048576
nova flavor-key h1.huge.hpgs set aggregate_instance_extra_specs:hpgs=true
nova flavor-key h1.small.hpgs set hw:mem_page_size=2048
nova flavor-key h1.small.hpgs set aggregate_instance_extra_specs:hpgs=true
Add the CPU pinning requirement to the flavors:
nova flavor-key h1.huge.hpgs set hw:cpu_policy=dedicated
nova flavor-key h1.huge.hpgs set aggregate_instance_extra_specs:pinned=true
nova flavor-key h1.small.hpgs set hw:cpu_policy=dedicated
nova flavor-key h1.small.hpgs set aggregate_instance_extra_specs:pinned=true
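The resulting extra specs can be double-checked with (not part of the original report):

nova flavor-show h1.huge.hpgs
nova flavor-show h1.small.hpgs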
Create an instance with the 2 MB huge page flavor (h1.small.hpgs)
Delete the instance
Create an instance with the 1 GB huge page flavor (h1.huge.hpgs); example boot commands are sketched below
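The exact boot commands are not given in the report; a minimal sketch with placeholder image and network IDs:

nova boot --flavor h1.small.hpgs --image <image> --nic net-id=<net-id> vm-2m
nova delete vm-2m
# the second boot is the one that fails with "No valid host"
nova boot --flavor h1.huge.hpgs --image <image> --nic net-id=<net-id> vm-1g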
Expected result:
Both VMs are created successfully
Actual result:
The VM with the 1 GB flavor is in Error state with a "No valid host" error.
From the libvirt log (node-2): qemuBuildNumaAr
No mount point for 1 GB huge pages is present
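The hugetlbfs mount points actually exposed on the host, and thus the page sizes libvirt can detect, can be listed with (command assumed, not from the original report):

grep hugetlbfs /proc/mounts
# a mount without an explicit pagesize= option serves the kernel's default huge page size (2 MB on x86_64)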
summary:
  - Simultaneously allocation 1Gb and 2 M HugePages prevent launching VM
  + Simultaneous allocation of 1Gb and 2 M HugePages break VMs launching
description: updated
Changed in fuel:
  status: New → Confirmed
tags: added: team-telco
Changed in fuel:
  assignee: Fuel Telco (fuel-telco-team) → Sergey Kolekonov (skolekonov)
From node-2:
root@node-2:~# mount
/dev/mapper/os-root on / type ext4 (rw,errors=panic)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
/dev/sda3 on /boot type ext2 (rw)
/dev/mapper/vm-nova on /var/lib/nova type xfs (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
hugetlbfs-kvm on /run/hugepages/kvm type hugetlbfs (rw,mode=0775,gid=114)
none on /sys/kernel/config type configfs (rw)
The libvirt configuration file /etc/libvirt/qemu.conf contains:
# If provided by the host and a hugetlbfs mount point is configured,
# a guest may request huge page backing. When this mount point is
# unspecified here, determination of a host mount point in /proc/mounts
# will be attempted. Specifying an explicit mount overrides detection
# of the same in /proc/mounts. Setting the mount point to "" will
# disable guest hugepage backing. If desired, multiple mount points can
# be specified at once, separated by comma and enclosed in square
# brackets, for example:
#
#     hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
#
# The size of huge page served by specific mount point is determined by
# libvirt at the daemon startup.
#
# NB, within these mount points, guests will create memory backing
# files in a location of $MOUNTPOINT/libvirt/qemu
#
#hugetlbfs_mount = "/dev/hugepages"
hugetlbfs_mount = "/run/hugepages/kvm"
It seems there should be two mount points, one for 2 MB pages and one for 1 GB pages. However, when hugetlbfs_mount is set to ["/run/hugepages/kvm_2M", "/run/hugepages/kvm_1G"], the same single mount point (/run/hugepages/kvm) is created after reboot.
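For reference, a minimal manual sketch of the two-mount-point layout the qemu.conf comment describes (paths, options, and the 1 GB kernel reservation are assumptions; this is not the committed Fuel fix):

# 1 GB pages must be reserved by the kernel, e.g. hugepagesz=1G hugepages=10 on the kernel command line
mkdir -p /dev/hugepages2M /dev/hugepages1G
mount -t hugetlbfs -o pagesize=2M hugetlbfs /dev/hugepages2M
mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
# then point libvirt at both mounts in /etc/libvirt/qemu.conf and restart the daemon:
#   hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]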