WatchAllModels sends notifications for empty changes
Bug #1747708 reported by Stuart Bishop
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | Medium | Anastasia | 2.4-beta1
Bug Description
Using the WatchAllModels API call, I receive notifications from a model containing a single idle cs:postgresql unit every 5 minutes when the update-status hook runs. I suspect it is the charm calling status-set to ensure the workload status message is set correctly, or some similar no-op change.
Using WatchAllModels on a production controller with hundreds of models and thousands of units, I receive notifications faster than I can keep up with.
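As a client-side workaround, a consumer of the delta stream can normalise each notification by dropping the `since` timestamps and suppress deltas that are otherwise identical to the last one seen for the same entity. The following is a minimal Go sketch under that assumption; the `Delta` type and field names are hypothetical stand-ins mirroring the map dumps quoted further down, not the real Juju client API types.

```go
package main

import (
	"fmt"
	"reflect"
)

// Delta mirrors the map form of the notifications quoted in this report;
// the real Juju client types differ, so treat this shape as hypothetical.
type Delta struct {
	Kind   string                 // "application change", "unit change", ...
	Fields map[string]interface{} // entity fields, as in the dumps below
}

// stripSince returns a deep copy of m with every "since" timestamp removed,
// so that two no-op updates compare as equal.
func stripSince(m map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(m))
	for k, v := range m {
		if k == "since" {
			continue
		}
		if nested, ok := v.(map[string]interface{}); ok {
			out[k] = stripSince(nested)
		} else {
			out[k] = v
		}
	}
	return out
}

// Filter remembers the last normalised delta per entity and reports
// whether a new delta carries any real change.
type Filter struct {
	last map[string]map[string]interface{}
}

func NewFilter() *Filter {
	return &Filter{last: make(map[string]map[string]interface{})}
}

func (f *Filter) Keep(d Delta) bool {
	key := fmt.Sprintf("%s/%v/%v", d.Kind, d.Fields["model-uuid"], d.Fields["name"])
	norm := stripSince(d.Fields)
	if prev, ok := f.last[key]; ok && reflect.DeepEqual(prev, norm) {
		return false // identical apart from "since": drop it
	}
	f.last[key] = norm
	return true
}

func main() {
	f := NewFilter()
	mk := func(since string) Delta {
		return Delta{Kind: "unit change", Fields: map[string]interface{}{
			"model-uuid": "94f4722f-4a1e-47d0-8fe8-985f9f45da08",
			"name":       "postgresql/0",
			"workload-status": map[string]interface{}{
				"current": "active",
				"message": "Live master (9.5.10)",
				"since":   since,
			},
		}}
	}
	fmt.Println(f.Keep(mk("2018-02-06T16:34:06Z"))) // true: first sighting
	fmt.Println(f.Keep(mk("2018-02-06T16:38:33Z"))) // false: only "since" moved
}
```

This only hides the symptom for one consumer; every watcher connection still pays the cost of receiving and decoding the redundant deltas.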
Changed in juju:
status: In Progress → Fix Committed
milestone: none → 2.4-beta1
Changed in juju:
status: Fix Committed → Fix Released
It looks like when the charm calls status-set, the 'since' field in the application and unit status is updated, even if there was no change made.
Here are two batches of the notifications I'm seeing sent every five minutes from a minimal deployment. The deltas are identical, apart from the since field in the statuses:
[application change map[subordinate:false charm-url:cs:postgresql-165 owner-tag: min-units:0 life:alive constraints:map[] status:map[current:active message:Live master (9.5.10) since:2018-02-06T16:34:06.401035688Z version:] workload-version:9.5.10 model-uuid:94f4722f-4a1e-47d0-8fe8-985f9f45da08 name:postgresql exposed:false]]
[unit change map[series:xenial charm-url:cs:postgresql-165 private-address:10.0.4.244 machine-id:0 subordinate:false workload-status:map[version: current:active message:Live master (9.5.10) since:2018-02-06T16:34:06.401035688Z] name:postgresql/0 application:postgresql public-address:10.0.4.244 ports:[map[protocol:tcp number:5432]] port-ranges:[map[from-port:5432 to-port:5432 protocol:tcp]] agent-status:map[current:idle message: since:2018-02-06T16:29:53.494424567Z version:] model-uuid:94f4722f-4a1e-47d0-8fe8-985f9f45da08]]
[application change map[model-uuid:94f4722f-4a1e-47d0-8fe8-985f9f45da08 exposed:false owner-tag: status:map[version: current:active message:Live master (9.5.10) since:2018-02-06T16:38:33.795944943Z] workload-version:9.5.10 subordinate:false name:postgresql charm-url:cs:postgresql-165 life:alive min-units:0 constraints:map[]]]
[unit change map[application:postgresql series:xenial public-address:10.0.4.244 private-address:10.0.4.244 ports:[map[protocol:tcp number:5432]] subordinate:false workload-status:map[current:active message:Live master (9.5.10) since:2018-02-06T16:38:33.795944943Z version:] agent-status:map[message: since:2018-02-06T16:29:53.494424567Z version: current:idle] model-uuid:94f4722f-4a1e-47d0-8fe8-985f9f45da08 name:postgresql/0 charm-url:cs:postgresql-165 machine-id:0 port-ranges:[map[protocol:tcp from-port:5432 to-port:5432]]]]
So this may be a bug in status handling, rather than the WatchAllModels call. It is very common for charms to blindly reset their workload status.
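If the problem is indeed in status handling, one direction for a fix is to make the status write conditional: a repeated status-set with an unchanged value should neither bump `since` nor publish a delta. The Go sketch below is illustrative only, under the assumption of a hypothetical StatusDoc record; it is not the actual Juju state code or the committed fix.

```go
package main

import (
	"fmt"
	"time"
)

// StatusDoc is a hypothetical stand-in for the persisted status record;
// the real Juju state documents carry more fields.
type StatusDoc struct {
	Current string
	Message string
	Since   time.Time
}

// SetStatus writes the new status only if something other than the
// timestamp actually changed, so a charm that re-runs status-set with the
// same value on every update-status hook produces no delta.
func SetStatus(doc *StatusDoc, current, message string) (changed bool) {
	if doc.Current == current && doc.Message == message {
		return false // no-op: leave Since alone, publish nothing
	}
	doc.Current = current
	doc.Message = message
	doc.Since = time.Now()
	return true
}

func main() {
	doc := &StatusDoc{Current: "active", Message: "Live master (9.5.10)", Since: time.Now()}
	fmt.Println(SetStatus(doc, "active", "Live master (9.5.10)")) // false: suppressed
	fmt.Println(SetStatus(doc, "active", "Live master (9.6.1)"))  // true: real change
}
```

Guarding the write on the server side would also spare charms from having to check the current status themselves before calling status-set.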