Hi Armando, I fully agree with you: this is not the desired solution, and the proper fix should be provided by OVS and then consumed by neutron. I did not mean to push this as the solution; it was just an attempt to see whether it could work for the use case I have (QoS with trunk ports). That is why I only included the tag 'Related-Bug' instead of 'Closes-Bug', though perhaps I should not have included any tag at all.
Regarding your comments:
- I see the 'egress-policer implementation for DPDK', but since this comes from the openvswitch release notes, does 'egress' mean egress from the OVS bridge's perspective or from the VM's (which would be ingress to the OVS bridge)? If the latter, I need to figure out how to make use of it at the neutron level; if the former, the gap for the VM egress bandwidth remains (see the first sketch after this list).
- Note I was not targeting DPDK at all. What I tried was QoS with trunk ports, i.e., applying a max bandwidth limit to one of the subports. The problem is that, for normal QoS, the egress-policer bandwidth rule (ingress from the OVS point of view) is applied on the tap device that connects the VM to the bridge (or on the veth device when the OVS firewall driver is not used). In the trunk port scenario, however, what is plugged into the OVS br-int is not the tap or veth device but a patch port, which in turn connects to the OVS trunk bridge that the VM is attached to. And it seems these rules are applied at the kernel level, while patch ports are purely virtual OVS constructs, so the kernel does not know about them (see the second sketch after this list).
- A possible use case of QoS with trunk ports is a Kubernetes or OpenShift deployment inside OpenStack, where you want to leverage neutron functionality by using kuryr. In such a case, you may want some of the containers deployed inside the VMs to have bandwidth limitations. Another use case is VNFs where, instead of having multiple vNICs, VLAN-aware VMs need to be used; in that case it is also desirable to have some QoS control for those VNF VMs (see the third sketch after this list for the neutron-side workflow).
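Regarding the first point, the OVS documentation shows the DPDK egress-policer being configured per port roughly as below; the port name and the cir/cbs values here are just placeholders for illustration:

  # Attach an egress-policer QoS profile to a DPDK vhost-user port
  # (cir is the committed information rate in bytes/s, cbs the burst size in bytes):
  ovs-vsctl set port vhost-user0 qos=@newqos -- \
      --id=@newqos create qos type=egress-policer \
      other-config:cir=46000000 other-config:cbs=2048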
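On the second point, for a regular (non-trunk) port the neutron limit ends up as OVS ingress policing on the tap interface, which OVS enforces via kernel tc. A minimal sketch, with a hypothetical interface name and illustrative rates:

  # Rate-limit traffic coming from the VM (ingress from the OVS perspective);
  # rate is in kbps, burst in kb:
  ovs-vsctl set interface tap1234abcd ingress_policing_rate=10000
  ovs-vsctl set interface tap1234abcd ingress_policing_burst=1000

  # Setting the same values on a patch port has no effect: there is no
  # kernel netdev behind it for tc to attach the policer to.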
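And for the use cases in the last point, the neutron-side workflow I was testing is roughly the following (policy name, rates, and port id are placeholders); the policy attaches to the subport without errors, but the backend cannot enforce it for the reason above:

  # Create a bandwidth-limit QoS policy and apply it to one trunk subport:
  openstack network qos policy create bw-limit
  openstack network qos rule create --type bandwidth-limit \
      --max-kbps 3000 --max-burst-kbits 300 bw-limit
  openstack port set --qos-policy bw-limit <subport-id>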