The effect of this bug is that if you boot a VM with CONF.force_config_drive=True and then hard reboot the VM, it will no longer have access to the config drive. If the deployment does not have the metadata service, or uses the config drive for file injection, then this is a regression and possible data loss.
For clouds without the metadata service deployed, like Rackspace, this would mean that VMs no longer have access to vendordata or other metadata, like device role tagging, after their first boot, which is incorrect.
There is a bug in the patch that was merged on master, which I'm going to fix once I file the new bug.
Basically these two lines need to be swapped: https://github.com/openstack/nova/blob/86524773b8cd3a52c98409c7ca183b4e1873e2b8/nova/compute/manager.py#L1757-L1758
Otherwise required_by will always be False if a config drive is not requested via the spawn API: https://review.opendev.org/#/c/659703/8/nova/virt/configdrive.py@169
Before https://review.opendev.org/#/c/659703/8, required_by did not depend on instance.launched_at; now it does.
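A minimal sketch of why the order of those two lines matters. This is not the actual Nova code; the class and function names here are simplified stand-ins, and the only assumption carried over from the review is that required_by now treats a set instance.launched_at as "not first boot":

```python
# Hedged illustration of the ordering bug, not real Nova code.

class Instance:
    """Simplified stand-in for a Nova instance object."""
    def __init__(self):
        self.launched_at = None  # unset until first boot completes


def required_by(instance, force_config_drive=True):
    # Stand-in for the post-change check: with force_config_drive,
    # a config drive is only required on first boot, detected by
    # launched_at still being unset.
    first_boot = instance.launched_at is None
    return force_config_drive and first_boot


def spawn_buggy(instance):
    # Buggy order: launched_at is recorded *before* the config-drive
    # check, so required_by() always sees "not first boot" and the
    # config drive is never attached.
    instance.launched_at = "2019-05-17T12:00:00Z"
    return required_by(instance)


def spawn_fixed(instance):
    # Fixed order (the two lines swapped): check the config-drive
    # requirement first, then record the launch timestamp.
    needed = required_by(instance)
    instance.launched_at = "2019-05-17T12:00:00Z"
    return needed
```

With the buggy ordering, spawn_buggy() returns False even though force_config_drive is set; with the lines swapped, spawn_fixed() correctly returns True on first boot.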