libvirtd going into a tight loop causing instances to not transition to ACTIVE
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
libvirt (Ubuntu) | Fix Released | Undecided | Unassigned |
Bionic | Incomplete | Undecided | Unassigned |
Bug Description
Description
===========
This is current master branch (wallaby) of OpenStack.
We have seen this regularly, but it is intermittent. Said another way: we see it in jobs daily, but not on every run.
We are seeing nova instances that do not transition to ACTIVE inside five minutes. Investigating this led us to find that libvirtd seems to be going into a tight loop on an instance delete.
The 136MB log is here: https:/
The overall job logs are here: https:/
When running the Octavia scenario test suite, we occasionally see nova instances fail to become ACTIVE in a timely manner, causing timeouts and failures. In investigating this issue we found the libvirtd log was 136MB.
Most of the file is full of this repeating:
2020-10-28 23:45:06.330+0000: 20852: debug : qemuMonitorIO:767 : Error on monitor internal error: End of file from qemu monitor
2020-10-28 23:45:06.330+0000: 20852: debug : qemuMonitorIO:788 : Triggering EOF callback
2020-10-28 23:45:06.330+0000: 20852: debug : qemuProcessHand
2020-10-28 23:45:06.330+0000: 20852: debug : qemuProcessHand
Here is a snippet for the lead in to the repeated lines:
http://
It appears to be a tight loop, repeating many times per second.
Eventually it does stop and things seem to go back to normal in nova.
Here is the snippet of the end of the loop in the log:
http://
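To quantify how tight the loop is, a rough sketch like the following could bucket the repeated qemuMonitorIO error lines by second. This is not part of the bug report; the log filename and the exact regex are assumptions based on the sample lines above.

```python
#!/usr/bin/env python3
"""Rough sketch: count how many times per second the repeated
'Error on monitor ... End of file from qemu monitor' line appears
in a libvirtd debug log (log path passed as the first argument,
defaulting to the hypothetical 'libvirtd.log')."""
import collections
import re
import sys

# Matches lines like:
# 2020-10-28 23:45:06.330+0000: 20852: debug : qemuMonitorIO:767 : Error on monitor ...
PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+\+\d{4}: \d+: debug : "
    r"qemuMonitorIO:\d+ : Error on monitor"
)

counts = collections.Counter()
log_path = sys.argv[1] if len(sys.argv) > 1 else "libvirtd.log"
with open(log_path, errors="replace") as f:
    for line in f:
        m = PATTERN.match(line)
        if m:
            # Bucket by whole second of the timestamp.
            counts[m.group(1)] += 1

for second, n in sorted(counts.items()):
    print(f"{second}  {n} iterations")
```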
There is an interesting sequence of log lines in the nova-compute log for the instance (instance-00000001) that is referenced in the repeated libvirt log entries.
Oct 28 23:45:02.776968 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: INFO nova.virt.libvirt.driver [-] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Instance spawned successfully.
Oct 28 23:45:02.776968 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: INFO nova.compute.manager [None req-e609f22f-0c5b-4e75-8fed-2cc5ea78decb admin admin] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Took 326.09 seconds to spawn the instance on the hypervisor.
Oct 28 23:45:02.777289 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: DEBUG nova.compute.manager [None req-e609f22f-0c5b-4e75-8fed-2cc5ea78decb admin admin] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Checking state {{(pid=16413) _get_power_state /opt/stack/nova/nova/compute/manager.py:1557}}
Oct 28 23:45:02.828879 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: DEBUG nova.compute.manager [None req-1efcbd1f-1af9-4e08-8572-e04879fa08fb None None] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Checking state {{(pid=16413) _get_power_state /opt/stack/nova/nova/compute/manager.py:1557}}
Oct 28 23:45:02.832224 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: DEBUG nova.compute.manager [None req-1efcbd1f-1af9-4e08-8572-e04879fa08fb None None] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: deleting, current DB power_state: 0, VM power_state: 1 {{(pid=16413) handle_lifecycle_event /opt/stack/nova/nova/compute/manager.py:1294}}
Oct 28 23:45:02.865991 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: INFO nova.compute.manager [None req-1efcbd1f-1af9-4e08-8572-e04879fa08fb None None] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 28 23:45:03.052749 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: DEBUG nova.compute.utils [None req-e609f22f-0c5b-4e75-8fed-2cc5ea78decb admin admin] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Conflict updating instance 7a343084-ac33-4dcb-8343-744fe2c1fb26. Expected: {'task_state': ['spawning']}. Actual: {'task_state': 'deleting'} {{(pid=16413) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:429}}
Oct 28 23:45:03.065608 nested-virt-ubuntu-bionic-vexxhost-ca-ymq-1-0021169707 nova-compute[16413]: DEBUG nova.compute.manager [None req-e609f22f-0c5b-4e75-8fed-2cc5ea78decb admin admin] [instance: 7a343084-ac33-4dcb-8343-744fe2c1fb26] Instance disappeared during build. {{(pid=16413) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:2266}}
Nova reports that the instance was built successfully, and right after that it reports that the instance disappeared during build.
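The "Conflict updating instance" line above is the spawn path losing a race with a concurrent delete: the final save at the end of the build only succeeds if the task_state is still the one the spawn path expects. The following is a minimal sketch of that conditional-update pattern, not Nova's actual code; the names and structure are illustrative only.

```python
# Minimal sketch (not Nova's actual code) of a conditional state update that
# would produce a message like:
#   Conflict updating instance. Expected: {'task_state': ['spawning']}.
#   Actual: {'task_state': 'deleting'}
# The spawn path only flips the instance to ACTIVE if the task_state is still
# what it expects, so a concurrent delete wins the race and the build is abandoned.

class UnexpectedTaskStateError(Exception):
    pass


def save_instance(instance: dict, updates: dict, expected_task_state: list) -> None:
    """Apply updates only if the stored task_state still matches."""
    if instance["task_state"] not in expected_task_state:
        raise UnexpectedTaskStateError(
            f"Expected: {{'task_state': {expected_task_state}}}. "
            f"Actual: {{'task_state': {instance['task_state']!r}}}"
        )
    instance.update(updates)


instance = {"vm_state": "building", "task_state": "spawning"}

# A concurrent delete request changes the task_state first...
instance["task_state"] = "deleting"

# ...so the spawn path's final save is rejected and the build is abandoned.
try:
    save_instance(instance,
                  {"vm_state": "active", "task_state": None},
                  expected_task_state=["spawning"])
except UnexpectedTaskStateError as exc:
    print(f"Conflict updating instance. {exc}")
```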
It seems that the whole libvirt error loop happens after nova resumed the guest in libvirt. The fact that the overall instance boot took mo...