1. volume has a specific os controller node as `os-vol-host-attr:host`
2. that os controller node gets maintenance
3. vm instance with attached volume gets deleted
4. nova throws: openstack.nova nova-compute c84d9828-1277-457a-828d-db7dc3c03216 [instance: d5832d72-0b70-422e-ba94-12b24f1a75e1] Ignoring unknown cinder exception for volume 615d4759-bed3-4a84-91a8-8fce612bfb2a: Gateway Time-out (HTTP 504): cinderclient.exceptions.ClientException: Gateway Time-out (HTTP 504)
the volume then still exists and still claims to be attached to a nonexistent vm.
we can of course clean this up manually.
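for reference, the manual cleanup we do looks roughly like this (volume id taken from the log above; `reset-state` is admin-only and only rewrites the database, so first make sure the attachment really is gone on the backend):

```shell
# force the stale volume back to a usable state (admin credentials needed)
cinder reset-state --state available --attach-status detached \
    615d4759-bed3-4a84-91a8-8fce612bfb2a
```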
afaiu kolla-ansible would need to add an active/active cinder deployment with a coordination service like pacemaker or etcd?
would it be possible to somehow mimic what tripleo does? as far as I understand, they have implemented an active/active cinder deployment with an etcd coordinator?
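for illustration, an active/active cinder setup with etcd boils down to something like the following `cinder.conf` fragment on every controller (the cluster name and etcd endpoint here are made-up example values, not taken from this bug):

```ini
[DEFAULT]
# same cluster name on all cinder-volume services enables active/active;
# the volume is then owned by the cluster, not one controller host
cluster = cinder-cluster-1

[coordination]
# tooz etcd3 driver as the distributed lock manager
backend_url = etcd3+http://192.0.2.10:2379
```

with this, any surviving cinder-volume service in the cluster can handle the detach, instead of requests timing out against the one controller that is in maintenance.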
Hi,
can someone provide an update on this bug?
Because we hit this in real-life deployments.