Just got hit by this too. Our prod cluster had node-readiness failures caused by this:
2023/02/12 14:28:47 [alert] 3006059#3006059: *11584397 socket() failed (24: Too many open files) while connecting to upstream, client: 10.88.0.62, server: server_443, request: "GET /api/v1/nodes/juju-ed4383-16?timeout=10s HTTP/2.0", upstream: "https://10.88.0.42:6443/api/v1/nodes/juju-ed4383-16?timeout=10s", host: "10.88.0.41:443"
2023/02/12 14:28:47 [alert] 3006059#3006059: *11584397 socket() failed (24: Too many open files) while connecting to upstream, client: 10.88.0.62, server: server_443, request: "GET /api/v1/nodes/juju-ed4383-16?timeout=10s HTTP/2.0", upstream: "https://10.88.0.39:6443/api/v1/nodes/juju-ed4383-16?timeout=10s", host: "10.88.0.41:443"
Etc.
I'm aware that it's my bad for not having HA on the kubeapi-load-balancer component to share the load, but it'd be great to have this be tunable.
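For anyone else hitting this: the `(24: Too many open files)` error means nginx workers are exhausting their file-descriptor limit. If you can edit the nginx config directly (the charm templates it, so this is a workaround sketch rather than a supported knob — values are illustrative), the standard nginx directives are `worker_rlimit_nofile` and `worker_connections`:

```nginx
# /etc/nginx/nginx.conf (illustrative values; tune to your workload)
worker_processes auto;

# Raise the per-worker file-descriptor limit so socket() stops
# failing with EMFILE (24: Too many open files).
worker_rlimit_nofile 65536;

events {
    # Each proxied request can hold two descriptors (client side
    # plus upstream side), so keep this comfortably below
    # worker_rlimit_nofile / 2.
    worker_connections 16384;
}
```

Note that if nginx runs under systemd, the unit's `LimitNOFILE` setting can still cap the process below `worker_rlimit_nofile`, so both may need raising.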