Hi Nobuto,

For (2), backends with different clustering config should be grouped into different cinder charm applications. For example:
charm: cinder, app: cinder-api
charm: cinder, app: cinder-volume-aa-drivers
charm: cinder-ceph, subordinate of cinder-volume-aa-drivers
charm: cinder, app: cinder-volume-ap-drivers
charm: cinder-purestorage, subordinate of cinder-volume-ap-drivers
charm: cinder-netapp, subordinate of cinder-volume-ap-drivers
"ap" above stands for "active-passive". "aa" for "active-active"
I have seen an environment configured as above without issues.
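As a sketch, that grouping could be expressed in a Juju bundle along these lines (application names as above; channels, options, and relation endpoints are simplified, so treat this as illustrative rather than a deployable bundle):

```yaml
applications:
  cinder-api:
    charm: cinder
    num_units: 3
  cinder-volume-aa-drivers:
    charm: cinder
    num_units: 3
  cinder-volume-ap-drivers:
    charm: cinder
    num_units: 3
  cinder-ceph:
    charm: cinder-ceph         # subordinate; active-active capable backend
  cinder-purestorage:
    charm: cinder-purestorage  # subordinate; active-passive backend
  cinder-netapp:
    charm: cinder-netapp       # subordinate; active-passive backend
relations:
  - [cinder-ceph, cinder-volume-aa-drivers]
  - [cinder-purestorage, cinder-volume-ap-drivers]
  - [cinder-netapp, cinder-volume-ap-drivers]
```

The key point is that each cinder-volume application carries only subordinates with the same clustering mode, so its clustering config is consistent across all of its backends.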
For (1), you are correct. This was discussed when I was implementing the change [0]; however, we didn't consider backwards compatibility because the only cinder driver that came to mind, cinder-netapp, would not be affected. We missed cinder-purestorage, which already existed. Let's enumerate the scenarios:
a) pure-storage deployment >= victoria: this will be changed to use cluster instead of host, but will remain in clustered mode without errors, since the driver supports active-active HA from victoria onwards.
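For reference, the difference in (a) comes down to how cinder-volume identifies itself: a shared host value versus the cluster option that cinder uses for proper active-active mode. Roughly (the values below are illustrative, not what the charm renders verbatim):

```ini
[DEFAULT]
# Before the change: all units share one host identity to emulate
# clustering (unsupported for active-passive drivers).
# host = cinder

# After the change on >= victoria: units join a named cluster, the
# supported mechanism for active-active HA.
cluster = cinder-volume-aa-drivers
```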
b) pure-storage deployment < victoria: this will be changed to stateless=false, and the environment will therefore be affected. Ideally, since the environment is running in an unsupported config, the operators should migrate to a non-clustered, active-passive setup. The workaround for backwards compatibility that I can see here is a charm config option (on the cinder-purestorage sub-charm) to override the stateless config for this case.
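A sketch of what such an override could look like in the cinder-purestorage charm's config.yaml (the option name and wording here are hypothetical, purely for illustration, and not the actual patch):

```yaml
options:
  override-stateless:  # hypothetical name, for illustration only
    type: boolean
    default: false
    description: |
      When true, force the backend to be treated as stateless even on
      clouds older than victoria, preserving the pre-existing
      (unsupported) clustered behaviour instead of switching to
      stateless=false.
```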
@Freyes is working on a patch as part of [1], and I believe that workaround could be included.
[0] https://review.opendev.org/c/openstack/charm-cinder/+/811472
[1] https://bugs.launchpad.net/charm-cinder-purestorage/+bug/1947702