Octavia load balancer security group is using the wrong source IP rule

Bug #1905008 reported by Márton Kiss
This bug affects 2 people.

Affects: Openstack Integrator Charm
Status: Triaged
Importance: Undecided
Assigned to: Unassigned

Bug Description

Kubernetes deployment on top of Octavia/Ussuri/OVN using the openstack-integrator charm, with port security enabled on the internal network. The Kubernetes API is unreachable through the Octavia load balancer VIP, because the source IP of the traffic arriving at the kubernetes-master units is not the LB VIP address but the VRRP IP of the MASTER amphora instance:

curl -> Octavia Load Balancer (VIP) -> Amphora instance (VRRP_IP) -> Kubernetes master unit (LB member, port 6443)

Loadbalancer layout: https://pastebin.canonical.com/p/KwpcpzNMjV/

Security group of the kubernetes master members:
openstack security group rule list openstack-integrator-bb31aff882ec-kubernetes-master-members
+--------------------------------------+-------------+-----------+----------------+------------+-----------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+----------------+------------+-----------------------+
| 8cffd2b8-2e88-4c4c-baba-350a7e29d3b3 | tcp | IPv4 | 10.0.20.213/32 | 6443:6443 | None |
| 977f8156-eecb-4aec-b85e-d0375cbf5d78 | None | IPv4 | 0.0.0.0/0 | | None |
| cfdb4104-8538-4829-94e8-4039b4dfda63 | None | IPv6 | ::/0 | | None |
+--------------------------------------+-------------+-----------+----------------+------------+-----------------------+

The port 6443 rule allows traffic only from the load balancer VIP (10.0.20.213).

When port 6443 is opened up with

openstack security group rule create --proto tcp --dst-port 6443 openstack-integrator-bb31aff882ec-kubernetes-master-members

the VRRP IP (10.0.20.202) rather than the LB VIP is clearly visible as the source address on the kubernetes-master units:
10:14:06.929571 IP 10.0.20.202.41475 > 10.0.20.67.6443: Flags [P.], seq 847:871, ack 2498, win 1516, options [nop,nop,TS val 2821808451 ecr 4105503376], length 24
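
For completeness, the source address can be cross-checked against the amphora records with the Octavia client (a sketch; the amphora ID is deployment-specific and column names may vary by client version):

$ openstack loadbalancer amphora list
$ openstack loadbalancer amphora show <amphora-id> -c role -c vrrp_ip -c ha_ip

The MASTER amphora's vrrp_ip should match the 10.0.20.202 source seen in the capture above, while ha_ip should be the VIP (10.0.20.213).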

Revision history for this message
loudgefly (loudgefly) wrote :

Same problem here; tried what was suggested in comment https://bugs.launchpad.net/charm-openstack-integrator/+bug/1884995/comments/8, but no luck.

Revision history for this message
Nikolay Vinogradov (nikolay.vinogradov) wrote :

Facing it as well: Bionic / Ussuri, K8s on OpenStack with the openstack-integrator charm. The workaround of allowing 6443 from the vrrp_ip to the backend helped.
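
For reference, a sketch of that workaround (the VRRP address 10.0.20.202 and the group name are taken from the description above and will differ per deployment):

$ openstack security group rule create --proto tcp --dst-port 6443 \
    --remote-ip 10.0.20.202/32 \
    openstack-integrator-bb31aff882ec-kubernetes-master-members

Note that an active/standby load balancer has two amphorae, so the BACKUP unit's VRRP address would need an equivalent rule, and the addresses change if an amphora is recreated.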

Revision history for this message
Nikolay Vinogradov (nikolay.vinogradov) wrote :

As a follow-up to my previous comment: the underlay cloud is Bionic / Ussuri with Neutron OVS.

Revision history for this message
Nobuto Murata (nobuto) wrote :

I couldn't reproduce it in the first place, but I think I know what's happening here.

tl;dr `juju expose kubernetes-master` may be the easiest workaround, but it will expose port 6443 to everyone.

It would be nice to tweak the LB members' rule to make it work out of the box though.

(There is a discussion about the expose feature, but that's a separate topic anyway: https://discourse.charmhub.io/t/granular-control-of-application-expose-parameters-in-the-upcoming-2-9-juju-release/3597)
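
For reference, a minimal sketch of that workaround (assuming the application is named kubernetes-master in the model):

$ juju expose kubernetes-master

This should make Juju add 0.0.0.0/0 ingress rules for the ports the charm has opened to the unit/machine security group described below.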

The k8s-master unit/machine is covered by three security groups: the Juju model one, the unit/machine one, and the LB members' one.

$ openstack server show juju-f8a3c5-k8s-on-openstack-0 -c security_groups
+-----------------+-----------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+-----------------------------------------------------------------------------------------+
| security_groups | name='openstack-integrator-f4720ef8a3c5-kubernetes-master-members' |
| | name='juju-6fda9e38-0c87-4a13-88f7-563386e719a9-fad73522-1466-41c0-8570-f4720ef8a3c5-0' |
| | name='juju-6fda9e38-0c87-4a13-88f7-563386e719a9-fad73522-1466-41c0-8570-f4720ef8a3c5' |
+-----------------+-----------------------------------------------------------------------------------------+

The model one allows all traffic inside the same model, which does not apply to Amphora instance <-> k8s-master traffic, since the Amphora is outside of the Juju model's security group.

Then, the LB members' rule allows traffic to 6443 only from a /32, which is the LB's vip_address.

$ openstack security group rule list openstack-integrator-f4720ef8a3c5-kubernetes-master-members
+--------------------------------------+-------------+-----------+---------------+------------+-----------+-----------------------+----------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Direction | Remote Security Group | Remote Address Group |
+--------------------------------------+-------------+-----------+---------------+------------+-----------+-----------------------+----------------------+
| 1b5a9b4a-c9f5-4730-a6ee-b53ed672d3aa | None | IPv6 | ::/0 | | egress | None | None |
| 7e7cf826-6732-4f26-89b0-0c27f4b2788e | None | IPv4 | 0.0.0.0/0 | | egress | None | None |
| ea4e048d-a164-431c-b44f-96076a41c859 | tcp | IPv4 | 10.5.5.116/32 | 6443:6443 | ingress | None | None |
+--------------------------------------+-------------+-----------+---------------+------------+-----------+-----------------------+----------------------+

But the unit/machine one can allow access to 6443 from anywhere when the application has expose=true in the Juju model. And that is enabled by default in the charmstore bundles: https://github.com/charmed-kubernetes/bundle/blob/045be1ee3cf544f67298fd22050cfbca98337bd4/fragments/k8s/core/bundle.yaml#L6-L13

$ openstack s...


Revision history for this message
Cory Johns (johnsca) wrote :

As a note, this is not a duplicate of https://bugs.launchpad.net/charm-openstack-integrator/+bug/1884995 because the latter is about LBs created by the cloud-provider-openstack component of K8s, whereas this is about LBs created by the openstack-integrator charm itself.

Revision history for this message
Bayani Carbone (bcarbone) wrote (last edit ):

Still facing this issue on a jammy/yoga deployment using the stable channel, revision 53.
The CK8s bundle is generated via the FCE bundle builder feature.
I used the workaround Nobuto mentioned, i.e. juju expose kubernetes-control-plane. One thing to note is that the latest charmstore charmed-kubernetes bundle no longer exposes kubernetes-control-plane.

I will file a bug against FCE to make the bundle builder generate a bundle with `expose: true` for kubernetes-control-plane, but this still feels like a bug in openstack-integrator.
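
For reference, such a bundle or overlay fragment might look like this (a sketch; all other application options omitted):

applications:
  kubernetes-control-plane:
    expose: true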

Changed in charm-openstack-integrator:
milestone: none → 1.27+ck1
status: New → Triaged
Changed in charm-openstack-integrator:
milestone: 1.27+ck1 → 1.27+ck2
Adam Dyess (addyess)
Changed in charm-openstack-integrator:
milestone: 1.27+ck2 → 1.29