You don't need to do DB edits for this.
The correct way to work around this is via the Neutron API:
NEUTRON_API_URL=$(openstack endpoint list --service neutron --interface public -c URL -f value)
TOKEN=$(openstack token issue -c id -f value)
curl -X DELETE -H "X-Auth-Token: $TOKEN" ${NEUTRON_API_URL}/v2.0/ports/<port-uuid>/bindings/<inactive-host-name>
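If you are not sure which host's binding is the inactive one, you can list the port's bindings first (this GET endpoint is part of the same Neutron port bindings API; the response should show each binding's host and whether it is ACTIVE or INACTIVE):

curl -H "X-Auth-Token: $TOKEN" ${NEUTRON_API_URL}/v2.0/ports/<port-uuid>/bindings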
We can likely handle this in nova by catching the conflict and deleting and recreating the inactive port binding.
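Roughly something like the sketch below, written against the raw Neutron REST API rather than nova's internal client (the function name, token handling, and retry behaviour are illustrative assumptions, not the actual nova code):

import requests

def ensure_port_binding(neutron_url, token, port_id, host):
    """Create a port binding for host, recovering from a stale inactive one."""
    headers = {"X-Auth-Token": token}
    url = f"{neutron_url}/v2.0/ports/{port_id}/bindings"
    body = {"binding": {"host": host}}

    resp = requests.post(url, json=body, headers=headers)
    if resp.status_code == 409:
        # A leftover inactive binding from a failed migration is in the way;
        # delete it and recreate the binding on the same host.
        requests.delete(f"{url}/{host}", headers=headers).raise_for_status()
        resp = requests.post(url, json=body, headers=headers)
    resp.raise_for_status()
    return resp.json()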
We have temporarily documented the workaround downstream in a KCS article, https://access.redhat.com/solutions/5827011, which I believe is public. But yes, I think the issue is that nova is not always cleaning up the inactive port bindings if a migration fails.
If you later try to migrate to the same host, it will then fail as reported.
Triaging as medium, as this bug will only happen if you live migrate a VM, the migration fails for another reason, and you then try to live migrate to the same host.
So most operators should not hit this, but it can happen.