Comment 5 for bug 1962755

John A Meinel (jameinel) wrote:

I believe they ran into this bug again (today) when upgrading a Juju controller in AWS.

The fix in the PR does appear to "do the right thing", with the caveat that when you upgrade a controller (which is what changes which config hashes get computed, and thus whether config-changed is triggered), you don't also upgrade the unit agent (where this fix was made).

And when you do upgrade the unit agent, it won't compute a *new* config hash, so it won't re-run config-changed one more time to fix what broke when you upgraded the controller.
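
To make that triggering mechanism concrete, here is a minimal Go sketch of the idea: the unit agent runs config-changed only when the hash it computes differs from the one it last stored, so a controller upgrade that changes how the hash is computed fires the hook once (while the agent is still the old, buggy version), and the later agent upgrade, which leaves the config and hash alone, doesn't fire it again. The function names and hashing scheme below are illustrative assumptions, not Juju's actual implementation.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// configHash is a stand-in for whatever hash the agent stores in its
// local state to decide whether config-changed needs to run.
func configHash(cfg map[string]string) string {
	keys := make([]string, 0, len(cfg))
	for k := range cfg {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(cfg[k]))
	}
	return hex.EncodeToString(h.Sum(nil))
}

// shouldRunConfigChanged fires the hook only when the freshly computed
// hash differs from the stored one.
func shouldRunConfigChanged(storedHash string, cfg map[string]string) (bool, string) {
	newHash := configHash(cfg)
	return newHash != storedHash, newHash
}

func main() {
	cfg := map[string]string{"port": "443"}

	// Controller upgrade: the hash is now computed differently, so the
	// stored hash no longer matches and the (still old, still buggy)
	// unit agent runs config-changed.
	run, newHash := shouldRunConfigChanged("hash-as-computed-before-upgrade", cfg)
	fmt.Println("after controller upgrade, run config-changed?", run) // true

	// Unit agent upgrade afterwards: the config hasn't changed, the hash
	// matches what is now stored, so config-changed does not run again.
	run, _ = shouldRunConfigChanged(newHash, cfg)
	fmt.Println("after agent upgrade, run config-changed?", run) // false
}
```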

In the AWS upgrade it looks like they went from 2.9.18 to 2.9.32.

So the sequencing is:
Controller  Agent
2.9.18      2.9.18
2.9.32      2.9.18   Notices a change in the config hash and runs config-changed;
                     the charm calls close-port 443 then open-port 443, and 2.9.18
                     still sees one of those as a no-op (because the port is
                     already open/closed), causing the port to toggle (sketched
                     below).
2.9.32      2.9.32   Nothing to be done, because the config hasn't changed.
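
The toggle in the middle row comes from the no-op check being made against a stale view of the port state. Below is a rough Go sketch of that failure mode; the type and method names are invented for illustration and this is not Juju's actual code.

```go
package main

import "fmt"

// portTracker is an illustrative model of how a unit agent might buffer
// open-port/close-port calls made by a charm during a single hook run.
type portTracker struct {
	modelOpen map[int]bool // port state as last reported by the controller
	ops       []string     // operations queued to apply when the hook ends
}

// closePort queues a close, but only if the snapshot says the port is
// currently open; otherwise it is dropped as a "no-op".
func (p *portTracker) closePort(port int) {
	if !p.modelOpen[port] {
		return // already closed in the snapshot: dropped
	}
	p.ops = append(p.ops, fmt.Sprintf("close-port %d", port))
}

// openPort queues an open, but the buggy check consults the stale model
// snapshot instead of the ops already queued in this hook, so an open
// that follows a close of the same port looks like a no-op and is lost.
func (p *portTracker) openPort(port int) {
	if p.modelOpen[port] {
		return // "already open" according to the stale snapshot: dropped
	}
	p.ops = append(p.ops, fmt.Sprintf("open-port %d", port))
}

func main() {
	// Port 443 is open in the model when config-changed starts.
	p := &portTracker{modelOpen: map[int]bool{443: true}}

	// The charm calls close-port 443 then open-port 443.
	p.closePort(443) // queued
	p.openPort(443)  // dropped as a no-op against the stale snapshot

	// Only the close survives, so applying the queued ops toggles the
	// port from open to closed instead of leaving it open.
	fmt.Println(p.ops) // [close-port 443]
}
```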

Note also that `juju unexpose`; `juju expose` won't fix anything, because it isn't that Juju told the cloud the wrong thing. It is that Juju misinterpreted what the charm told it should be open.
You probably could do `juju run --unit X -- hooks/config-changed` after the upgrade, and have that fix the problem.