Need a way to differentiate between resource limits and requests in kubernetes constraints

Bug #1919976 reported by Tom Haddon
This bug affects 3 people
Affects: Canonical Juju
Status: Triaged
Importance: High
Assigned to: Unassigned

Bug Description

Deploying a Kubernetes charm with constraints of "mem=100M cpu-power=100" (for instance) will produce the following (from `kubectl describe pod`):

Containers:
[...]
    Limits:
      cpu: 100m
      memory: 100Mi
    Requests:
      cpu: 100m
      memory: 100Mi

However, being able to set these to different values would be useful and is good practice in Kubernetes. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more details.
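For reference, a plain Kubernetes pod spec can already express the two values independently. A minimal sketch (container name, image, and numbers are illustrative only) of what Juju constraints currently cannot produce:

```
# Hypothetical pod spec fragment: requests set below limits.
# Juju currently emits the same value for both.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: workload          # illustrative name
      image: example:latest   # illustrative image
      resources:
        requests:
          cpu: 100m           # what the scheduler reserves on the node
          memory: 100Mi
        limits:
          cpu: 500m           # what the container may burst up to
          memory: 256Mi
```

With requests below limits, the scheduler only needs to find the smaller amount free on a node, while the workload can still burst toward the limit.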

Tags: canonical-is
Revision history for this message
Harry Pidcock (hpidcock) wrote :

Thanks Tom, we have this on our roadmap. I'm going to target this to 2.9.1 as it's fairly important.

Changed in juju:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.9.1
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.9.1 → 2.9.2
Changed in juju:
milestone: 2.9.2 → 2.9.3
Changed in juju:
milestone: 2.9.3 → 2.9.4
Revision history for this message
Giuseppe Petralia (peppepetra) wrote :

Is support for hugepages constraints on the roadmap as well?

Changed in juju:
milestone: 2.9.4 → 2.9.5
Revision history for this message
Ian Booth (wallyworld) wrote :
John A Meinel (jameinel)
Changed in juju:
milestone: 2.9.5 → 2.9-next
Tom Haddon (mthaddon)
tags: added: canonical-is
Harry Pidcock (hpidcock)
Changed in juju:
milestone: 2.9-next → 3.1-beta1
Changed in juju:
milestone: 3.1-beta1 → 3.2-beta1
Changed in juju:
milestone: 3.2-beta1 → 3.2-rc1
Changed in juju:
milestone: 3.2-rc1 → 3.2.0
Changed in juju:
milestone: 3.2.0 → 3.2.1
Revision history for this message
Paulo Machado (paulomachado) wrote :

Hello there! Is this fix planned to be backported to the 2.9 series? Thanks

Changed in juju:
milestone: 3.2.1 → 3.2.2
Changed in juju:
milestone: 3.2.2 → 3.2.3
Changed in juju:
milestone: 3.2.3 → 3.2.4
Revision history for this message
Alex Lutay (taurus) wrote :

Dear Juju Team,

Could you please consider increasing the priority of this ticket?
It will be a hot topic for the next customer mysql-k8s and postgresql-k8s DB deployments with Juju.

We have to deploy the DB with RAM constraints to properly perform RAM management inside the unit (auto-tuning), but Juju applies the same constraints to both the "charm" and "workload" containers. This is resource-inefficient (the charm container does not need 64GB of RAM), and it also means K8s will not schedule the charm on a node because of the doubled resource requests. Example:

> `juju deploy mysql-k8s --constraints mem=64G`
^ The pod will NOT be scheduled on a K8s node with 64GB RAM.
The constraint effectively requires 128GB+ of availability.

> `juju deploy mysql-k8s --constraints mem=32G`
^ The pod STILL will not be scheduled on a K8s node with 64GB RAM! <== Boom.
The DB charm cannot be deployed at all if all your K8s nodes have 64GB RAM.

Tnx!
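To illustrate the doubling described above: Juju sidecar charms run two containers in the same pod, and the constraint is mirrored onto both. A sketch of the resulting pod (container names are illustrative, not taken from Juju's actual output):

```
# Sketch only: what `juju deploy mysql-k8s --constraints mem=64G`
# effectively asks the scheduler for.
spec:
  containers:
    - name: charm             # charm/operator container
      resources:
        requests:
          memory: 64Gi        # mirrored from the constraint
    - name: mysql             # workload container
      resources:
        requests:
          memory: 64Gi        # same value again
# Pod scheduling requirement: 64Gi + 64Gi = 128Gi,
# so no 64GB node can ever host this pod.
```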

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1919976] Re: Need a way to differentiate between resource limits and requests in kubernetes constraints

If the charm wouldn't be scheduled with a '32GB' constraint on 64GB nodes,
are you sure it would be scheduled without Juju in the mix and a 64GB
request? That at least sounds like K8s is reserving some memory on the node
for other activities, and you may need to go down to something like 30GB, at
least for testing purposes.

As far as actually being able to prioritize this: we certainly understand
that you have a business case for it, but without the concrete implications,
and without knowing whether there is a workaround, it is hard to prioritize
it relative to other activities.

We certainly should get this into our product feedback prioritization
queue, though.


Revision history for this message
Alex Lutay (taurus) wrote :

> ... That at least sounds like K8s is reserving some memory on the node for other activities...

Absolutely! Details: https://github.com/canonical/mysql-k8s-operator/issues/254#issuecomment-1632365691

TL;DR:
```
> kubectl describe node gke-taurus-20152-default-pool-dd74c68c-7cq0
...
Capacity:
...
  memory: 15358168Ki
Allocatable:
...
  memory: 12658904Ki
```

So K8s will NOT schedule the DB pod (`juju deploy mysql-k8s --constraints mem=8GB`) on a K8s node with 16GB of physical RAM: only ~12GB is "Allocatable", and Juju requests 8GB for the charm container plus 8GB for the workload container.
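Under that assumption, a pod that would actually fit on such a node needs a much smaller request on the charm container. A sketch with illustrative numbers (the charm-container figure is a guess, not measured):

```
# Sketch: distinct per-container requests would keep the pod schedulable
# on a node with ~12Gi allocatable (figure from the TL;DR above).
spec:
  containers:
    - name: charm
      resources:
        requests:
          memory: 256Mi       # the charm container needs comparatively little
    - name: mysql
      resources:
        requests:
          memory: 8Gi         # the actual DB constraint
# Total requests: ~8.25Gi, which fits within ~12Gi allocatable.
```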

Revision history for this message
Simon Aronsson (0x12b) wrote :