For context, it appears we're seeing this issue when the type of compute/networking resource configured for a given hyperconverged node is switched.
Imagine a scenario where you have a number of SR-IOV nodes with bond0 as the br-data data-port and non-SR-IOV nodes with bond1 as the br-data data-port. If ceph-osd is also deployed to these hyperconverged nodes, and you remove the SR-IOV openvswitch config and deploy the non-SR-IOV openvswitch config, the br-data:bond0 to br-data:bond1 data-port mapping change isn't properly effected by the removal/re-addition of the charm on the already-deployed unit.
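
For anyone hitting this, a minimal sketch of a manual workaround on the affected unit, assuming the bond0/bond1 and br-data names from the scenario above (adjust for your own bridge/port names):

    # The stale port from the removed SR-IOV config can remain attached to
    # the bridge; swap it by hand until the charm picks up the change:
    sudo ovs-vsctl del-port br-data bond0
    sudo ovs-vsctl add-port br-data bond1

This only fixes up the Open vSwitch state on the node; the charm's data-port config still needs to reflect br-data:bond1 so the mapping survives future hook runs.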