Yep, Octavia is a heavy service; the Amphora-backed load balancers will consume plenty of resources.
The key issue with using the raw kube-proxy together with Kuryr is the connectivity between the host node and the pods, right?
As shown in the network topology graph [1], I add a Neutron port (tap device) to the host node, so the host node can be treated as a common leaf device (at the same level as a pod). The tap device is therefore necessary, and the traffic between the host node and the pods can be treated as E/W traffic. It is also important to note that the IP addresses of the host nodes must not overlap with those of any pods.
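The non-overlap requirement above is easy to sanity-check ahead of time. A minimal sketch using the standard-library `ipaddress` module (the CIDRs here are made-up examples, not from the actual deployment):

```python
import ipaddress

# Hypothetical CIDRs for illustration only.
node_subnet = ipaddress.ip_network("10.0.0.0/24")       # host-node addresses
pod_subnets = [ipaddress.ip_network("10.1.0.0/16"),     # pod networks
               ipaddress.ip_network("10.2.0.0/16")]

# The host-node subnet must not overlap any pod subnet, otherwise
# E/W routing between the node and the pods breaks.
overlaps = [str(p) for p in pod_subnets if node_subnet.overlaps(p)]
assert not overlaps, f"overlapping subnets: {overlaps}"
print("no overlap")
```

Running a check like this when planning the subnets catches the conflict before any tap device is plugged in.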
For N/S traffic, we can leave everything to kube-proxy: the cluster IPs are allocated by kube-proxy, so Kuryr need not care about them. Kuryr only handles the traffic of pods and host nodes.
As for MetalLB, as far as I understand it only implements the LoadBalancer service type, so it can serve as a supplement to manage the external traffic.
[1] https://paste.opendev.org/show/bcmCY9L2hXU4KKF7ANZR/