We hit this issue today as well. Same symptoms:
* Failed to get metadata during VM launch - consistently, and only on the "affected" network. Other networks, like the "unaffected" one, are OK.
* Missing metadata route inside the VM.
* After adding the route to the .2 IP manually, we can ping/curl the metadata endpoint with no issues, so the route seems to be the only thing missing.
* The workaround of adding the metadata route explicitly to the relevant router lets new VMs in the affected network get metadata without problems (example commands below).
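For reference, roughly the commands involved (a sketch; the interface name, router name, and <metadata-ip> - the subnet's .2 port IP - are placeholders, not values from the attached data):

# inside the VM: add the missing metadata route by hand
ip route add 169.254.169.254/32 via <metadata-ip> dev eth0

# workaround on the Neutron side: route metadata traffic through the
# router, so VMs reach the .2 port via their default gateway
openstack router set <router-name> --route destination=169.254.169.254/32,gateway=<metadata-ip>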
These are the current packages:
ii neutron-common 2:16.4.2-0ubuntu1
ii neutron-ovn-metadata-agent 2:16.4.2-0ubuntu1
ii python3-neutron 2:16.4.2-0ubuntu1
ii python3-neutron-lib 2.3.0-0ubuntu1
ii python3-neutronclient 1:7.1.1-0ubuntu1
I am attaching the information requested above for an "affected" and an "unaffected" network. The main difference I see is that the "unaffected" subnet has the following option in the ovn-nb that is missing from the "affected" subnet:

classless_static_route="{169.254.169.254/32,10.131.83.2, 0.0.0.0/0,10.131.83.1}"
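To compare the two subnets yourself, listing the northbound DHCP options should be enough (a sketch; run wherever ovn-nbctl can reach the NB database):

# the options column should contain classless_static_route for healthy subnets
ovn-nbctl list DHCP_Options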
The two patches you mention are indeed included in python3-neutron 2:16.4.2-0ubuntu1. I additionally confirmed by checking /usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py
Regarding "ovn_metadata_enabled", I didn't find it set to "true" in any config under /etc/neutron. I can only see the default commented out and no mention in neutron_ovn_metadata_agent.ini, which has the ovs/ovn config in it (but I am no expert)
The creation logs are no longer available. The ports for the .2 IPs are created in the subnet, and they do have a device_id of ovnmeta-<networkid>, but the device_owner is network:dhcp and not network:distributed as you seem to be expecting. I added the output of `port show` for them as well. Note that other networks on the same compute nodes have no issues providing metadata, including the "unaffected" network (data attached).
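In case it helps others gather the same data, the metadata ports can be pulled up with (standard python-openstackclient flags; <network> and <port-id> are placeholders):

openstack port list --network <network> --device-owner network:dhcp --long
openstack port show <port-id>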