The max_users_per_team setting can't be increased beyond 1000 if use_canonical_defaults is True
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
charm-k8s-mattermost | Fix Released | High | Barry Price |
Bug Description
Modifying the charm setting `max_users_per_team` should change the maximum number of users allowed per Mattermost team.
Having just attempted this, though, nothing appears to have happened, and the running unit and pod appear to have been unaffected.
Manually editing the deployment (e.g. `kubectl edit deployment mattermost`) shows the old value for MM_TEAMSETTINGS_MAXUSERSPERTEAM.
Modifying the value to the desired number within that `kubectl edit` session and saving it seems to be a valid short-term workaround, but this can come undone later when the charm notices the discrepancy.
Unsetting `use_canonical_defaults` also works around the problem.
Related branches
- Tom Haddon (community): Approve
- 🤖 prod-jenkaas-is (community): Approve (continuous-integration)
- Canonical IS Reviewers: Pending
Diff: 49 lines (+2/-7), 3 files modified:
- config.yaml (+1/-1)
- src/charm.py (+0/-4)
- tests/unit/test_charm.py (+1/-2)
summary: Modifying max_users_per_team has no effect → The max_users_per_team setting can't be increased beyond 1000 if use_canonical_defaults is True
Changed in charm-k8s-mattermost:
status: New → Confirmed

Changed in charm-k8s-mattermost:
status: Confirmed → Fix Committed
assignee: nobody → Barry Price (barryprice)
importance: Undecided → High

Changed in charm-k8s-mattermost:
status: Fix Committed → Fix Released
Okay, I've dug into the code and it appears we do read this value from config, but then override it via the _update_pod_spec_for_canonical_defaults method if `use_canonical_defaults` is set:
https://git.launchpad.net/charm-k8s-mattermost/tree/src/charm.py#n399
I think we need to either rethink that, or increase the default value from 1000 to, say, 2000; 1000 is no longer enough for our deployments.
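For anyone following along, here's a minimal Python sketch of the behaviour described above, not the charm's actual code: the pod environment is modelled as a plain dict, and the MM_TEAMSETTINGS_MAXUSERSPERTEAM variable name is assumed from Mattermost's MM_<SECTION>_<KEY> convention (the real logic is in _update_pod_spec_for_canonical_defaults at the link above).

```python
# Illustrative sketch only -- the constant, function and dict-based pod
# environment below are assumptions, not code copied from the charm.

CANONICAL_MAX_USERS_PER_TEAM = 1000  # current hard-coded default


def apply_team_settings(config: dict, pod_env: dict) -> None:
    """Apply the team-size limit to the pod environment."""
    # The configured value is read first...
    pod_env["MM_TEAMSETTINGS_MAXUSERSPERTEAM"] = str(config["max_users_per_team"])

    # ...but the canonical-defaults path then overwrites it unconditionally,
    # so configured values above 1000 never reach the deployment.
    if config["use_canonical_defaults"]:
        pod_env["MM_TEAMSETTINGS_MAXUSERSPERTEAM"] = str(CANONICAL_MAX_USERS_PER_TEAM)


if __name__ == "__main__":
    env = {}
    apply_team_settings({"max_users_per_team": 5000, "use_canonical_defaults": True}, env)
    print(env["MM_TEAMSETTINGS_MAXUSERSPERTEAM"])  # prints 1000, not 5000
```

Either dropping the unconditional override (so the configured value always wins) or raising the hard-coded default, as suggested above, would address this.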