Update: I managed to reproduce the issue and confirm Gabriel's fix by cherry-picking https://github.com/gabriel-samfira/charm-cinder/commit/82505257b129db91b1aabeac744e02c900852287 onto stable/xena. But now I have two questions:
- With the fix there is no more traceback or crash, but I'm not sure I can confirm it has actually notified the instance successfully. If I run `lsblk` on the OpenStack instance after the live resize, it doesn't reflect the updated size; it only shows the updated size after I reboot the instance.
- Is this how we want to fix the issue? If the `auth_section = "SECTION_NAME"` format is actually supported, it would result in less duplication. However, I couldn't get it to work that way, and I can't find docs on how it's supposed to work. Also, https://github.com/openstack/cinder/blob/master/cinder/compute/nova.py#L115-L129 accesses the nova group from the config directly; I don't see if/where it resolves through an `auth_section` key (see the sketch below).
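
For what it's worth, here is a minimal sketch of how I understand `auth_section` is supposed to be resolved, i.e. inside keystoneauth1's conf loading rather than in cinder's own code. The `[nova]` group name and the `keystone_authtoken` target section below are just assumptions for illustration, not something taken from the charm or from cinder; if my understanding is wrong, please correct me:

```python
# Hypothetical illustration, not charm or cinder code. Suppose cinder.conf had:
#
#   [nova]
#   auth_section = keystone_authtoken
#
# As far as I understand keystoneauth1, the indirection would then be handled
# inside load_auth_from_conf_options(), so cinder/compute/nova.py would not
# need to resolve the auth_section key itself.
from keystoneauth1 import loading as ks_loading
from oslo_config import cfg

CONF = cfg.CONF
NOVA_GROUP = 'nova'

# Registering the auth options for [nova] also registers the auth_type and
# auth_section options for that group.
ks_loading.register_auth_conf_options(CONF, NOVA_GROUP)
ks_loading.register_session_conf_options(CONF, NOVA_GROUP)

CONF(['--config-file', '/etc/cinder/cinder.conf'])

# If [nova] auth_section points at another section, keystoneauth should read
# the credential options (auth_url, username, password, ...) from that section
# instead of [nova]; otherwise it falls back to [nova] itself.
auth = ks_loading.load_auth_from_conf_options(CONF, NOVA_GROUP)
session = ks_loading.load_session_from_conf_options(CONF, NOVA_GROUP, auth=auth)
print(session.get_token())
```

If that is indeed how it works, the charm would only need to write something like `auth_section = keystone_authtoken` into the `[nova]` section instead of duplicating all of the credentials, which is why I'd like to confirm it before we settle on the current approach.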