Ceilometer-agent-compute service not running after OpenStack upgrade

Bug #1927277 reported by Camille Rodriguez
This bug affects 2 people
Affects                           Status         Importance  Assigned to      Milestone
OpenStack Ceilometer Agent Charm  Fix Released   High        Aurelien Lourot
OpenStack Nova Compute Charm      Fix Released   High        Aurelien Lourot
Train                             Fix Committed  Undecided   Unassigned
Ussuri                            Fix Committed  Undecided   Unassigned
Victoria                          Fix Committed  Undecided   Unassigned
Wallaby                           Fix Committed  Undecided   Unassigned
Xena                              Fix Released   Undecided   Unassigned

Bug Description

On a deployment, most or all of the ceilometer-agent units are blocked in a state with "ceilometer-agent-compute" not running. Across 5 redeployments, at least one unit ran into that status every time.
The workaround was simply to run a systemctl restart ceilometer-agent-compute.service on the affected units.

I will gather and upload logs in the next 24h.

Changed in charm-ceilometer-agent:
status: New → Incomplete
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

We seem to have hit this on a recent ceilometer-agent review, the focal-ussuri bundle shows this in the pause-resume tests: https://review.opendev.org/c/openstack/charm-ceilometer-agent/+/789876

Changed in charm-ceilometer-agent:
status: Incomplete → Confirmed
Revision history for this message
Hybrid512 (walid-moghrabi) wrote (last edit):

I can confirm this bug too.
Since charm 21.04, every time I deploy a new cluster with the same bundle, I end up with "Services not running that should be: ceilometer-agent-compute".

The weird thing on my side is that I can't even restart the service by hand on the unit.
When SSHing into a ceilometer-agent unit, running "systemctl restart ceilometer-agent-compute.service" gives me this:

root@juju-5073fb-5-lxd-3:~# systemctl restart ceilometer-agent-compute.service
Failed to restart ceilometer-agent-compute.service: Unit nova-compute.service not found.

However, the service exists:

root@juju-5073fb-5-lxd-3:~# systemctl status ceilometer-agent-compute.service
● ceilometer-agent-compute.service - Ceilometer Agent Compute
     Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled)
     Active: inactive (dead)

My fix is to reboot the unit, after which the service starts properly.

I have to say also that only one kind of agent fails this way: the agents subordinate to the Glance units.
I don't have any issue with the agents subordinate to nova-compute units.

Deployment information:
- Deployed on MaaS (3.0) with Juju (2.9.5)
- Deployment for focal-ussuri

Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

I was able to reproduce this by running our automated OpenStack upgrade tests [0] upgrading a large OpenStack model from focal-ussuri to focal-victoria. Investigating.

[0] https://github.com/openstack-charmers/charmed-openstack-tester/blob/master/tests/openstack-upgrade/tests/tests.yaml

Changed in charm-ceilometer-agent:
status: Confirmed → In Progress
importance: Undecided → High
assignee: nobody → Aurelien Lourot (aurelien-lourot)
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

Similarly to #1, in my case this is related to the pause/resume functionality. It seems that when we pause the principal unit (nova-compute) in order to perform the OpenStack upgrade, ceilometer-agent-compute gets properly stopped and terminates successfully (exits 0). But nothing later re-attempts to start that service, even though we resume the principal unit. So it seems we "forget" to restart that service somewhere. Investigating.

tags: added: openstack-upgrade
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

The corresponding smoke test (mentioned in #1) passed for me on focal-ussuri:

2021-07-21 13:04:35 [INFO] test_901_pause_resume (zaza.openstack.charm_tests.ceilometer.tests.CeilometerTest)
2021-07-21 13:04:35 [INFO] Run pause and resume tests.
2021-07-21 13:04:35 [INFO] ...
2021-07-21 13:05:05 [INFO] Testing pause and resume
2021-07-21 13:05:36 [INFO] ok

But 2 hours later my model was still up and I noticed the unit status changing spuriously to blocked, and back to active:

2021-07-21 14:53:06 DEBUG update-status active
2021-07-21 14:53:06 DEBUG update-status active
2021-07-21 14:53:06 INFO juju-log Unit is ready
2021-07-21 14:57:31 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:35 INFO juju-log Registered config file: /etc/ceilometer/ceilometer.conf
2021-07-21 14:57:35 INFO juju-log Registered config file: /etc/ceilometer/polling.yaml
2021-07-21 14:57:35 INFO juju-log Registered config file: /etc/memcached.conf
2021-07-21 14:57:37 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:42 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:46 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:51 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:55 DEBUG juju-log Generating template context for amqp
2021-07-21 14:57:57 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 14:57:59 DEBUG update-status active
2021-07-21 14:57:59 DEBUG update-status deactivating
2021-07-21 15:03:20 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:23 INFO juju-log Registered config file: /etc/ceilometer/ceilometer.conf
2021-07-21 15:03:24 INFO juju-log Registered config file: /etc/ceilometer/polling.yaml
2021-07-21 15:03:24 INFO juju-log Registered config file: /etc/memcached.conf
2021-07-21 15:03:25 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:30 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:35 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:39 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:44 DEBUG juju-log Generating template context for amqp
2021-07-21 15:03:46 WARNING juju-log Package openstack-release has no installation candidate.
2021-07-21 15:03:49 DEBUG update-status active
2021-07-21 15:03:49 DEBUG update-status active
2021-07-21 15:03:49 INFO juju-log Unit is ready

Looking at the service logs, it seems like the service keeps dying and restarting every 10 seconds:

Jul 21 15:17:38 juju-b5df3e-zaza-be83d51f921b-14 ceilometer-agent-compute[87733]: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
Jul 21 15:17:43 juju-b5df3e-zaza-be83d51f921b-14 systemd[1]: Stopping Ceilometer Agent Compute...
Jul 21 15:17:43 juju-b5df3e-zaza-be83d51f921b-14 systemd[1]: ceilometer-agent-compute.service: Succeeded.
Jul 21 15:17:43 juju-b5df3e-zaza-be83d51f921b-14 systemd[...


Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

This is not related to pause/resume. On a fresh focal-ussuri deployment [0] ceilometer-agent-compute.log shows that it's getting a SIGTERM every ~10 seconds:

2021-07-22 11:41:08.495 65340 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:08.506 65340 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
2021-07-22 11:41:08.507 65340 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:12.537 65302 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
2021-07-22 11:41:12.539 65340 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [65340]
2021-07-22 11:41:18.148 65385 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:18.160 65385 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
2021-07-22 11:41:18.160 65385 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:21.788 65356 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
2021-07-22 11:41:21.791 65385 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [65385]
2021-07-22 11:41:27.046 65420 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:27.057 65420 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
2021-07-22 11:41:27.057 65420 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
2021-07-22 11:41:30.786 65399 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
2021-07-22 11:41:30.788 65420 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [65420]

The real issue here actually seems to be "No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].". Our charm produces /etc/ceilometer/polling.yaml [1], but the default path searched for pollster YAML files is now /etc/ceilometer/pollsters.d/ [2].

[0] https://github.com/openstack/charm-ceilometer-agent/blob/master/tests/bundles/focal-ussuri.yaml
[1] https://github.com/openstack/charm-ceilometer-agent/blob/master/hooks/ceilometer_utils.py#L53
[2] https://docs.openstack.org/ceilometer/latest/configuration/index.html
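
For context, a minimal sketch of the two distinct configuration sources in play (paths taken from this report; the helper itself is illustrative, not ceilometer code): polling.yaml carries the static polling definitions rendered by the charm, while pollsters.d/ is only searched for *dynamic* pollsters, a separate Ussuri+ feature.

```python
import pathlib
import tempfile

def polling_sources(etc_dir):
    """Return (static_polling_file, dynamic_pollster_files).

    polling.yaml holds the static polling definitions rendered by the
    charm; *.yaml files under pollsters.d/ define dynamic pollsters,
    which have their own default search path.
    """
    etc = pathlib.Path(etc_dir)
    static = etc / "polling.yaml"
    dynamic = sorted((etc / "pollsters.d").glob("*.yaml"))
    return (static if static.exists() else None, dynamic)

with tempfile.TemporaryDirectory() as tmp:
    etc = pathlib.Path(tmp)
    (etc / "pollsters.d").mkdir()
    # The charm renders polling.yaml but never drops files into
    # pollsters.d/, hence the "No dynamic pollsters found" INFO message.
    (etc / "polling.yaml").write_text("sources:\n  - name: some_pollsters\n")
    static, dynamic = polling_sources(etc)
```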

Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :
summary: - Ceilometer-agent-compute service not running
+ [Ussuri+] Ceilometer-agent-compute service not running
tags: removed: openstack-upgrade
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote : Re: [Ussuri+] Ceilometer-agent-compute service not running

The Ussuri+ pollsters.d/polling.yaml error message mentioned in #6 and #7 appears to be a red herring. I have implemented functional tests [0] which look at which metrics get published by ceilometer-agent-compute. I have run these tests several times against all releases between Rocky and Wallaby. For all these releases, changing the charm config option `enable-all-pollsters` has an impact on the polling.yaml file, which in turn has an impact on the published metrics. This proves that ceilometer-agent-compute **can** find this file, so the agent not finding this file can't be the root cause of the original bug.

[0] https://github.com/openstack-charmers/zaza-openstack-tests/pull/615

summary: - [Ussuri+] Ceilometer-agent-compute service not running
+ Ceilometer-agent-compute service not running
Changed in charm-ceilometer-agent:
status: In Progress → Triaged
assignee: Aurelien Lourot (aurelien-lourot) → nobody
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-ceilometer-agent (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/charm-ceilometer-agent/+/803359

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-ceilometer-agent (master)

Reviewed: https://review.opendev.org/c/openstack/charm-ceilometer-agent/+/803359
Committed: https://opendev.org/openstack/charm-ceilometer-agent/commit/300c8a4657ab22a64cc47303765325faa89eb10d
Submitter: "Zuul (22348)"
Branch: master

commit 300c8a4657ab22a64cc47303765325faa89eb10d
Author: Aurelien Lourot <email address hidden>
Date: Thu Jul 22 11:51:17 2021 +0200

    Add CeilometerAgentTest to the gate

    Change-Id: I016792b5642c5d3e7a08fad4fb8151f44fb67dce
    Func-Test-PR: https://github.com/openstack-charmers/zaza-openstack-tests/pull/615
    Related-Bug: #1927277
    Related-Bug: #1938884

Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote : Re: Ceilometer-agent-compute service not running

As mentioned in #3, I am able to consistently reproduce this by running our automated OpenStack upgrade tests. [1]

Here is what is happening:

1. We pause the principal unit (nova-compute).
2. We perform an upgrade of all packages on the unit, including ceilometer-agent-compute. When upgrading this package, it wants to restart the service, but the service refuses to start because nova-compute.service is masked.
3. We resume the principal unit (nova-compute), which restarts nova-compute.service, but ceilometer-agent-compute.service is left behind, stopped.

Working on a solution.

unit-nova-compute-0.log:2021-08-17 11:41:59 DEBUG openstack-upgrade alembic ceilometer-agent-compute ceilometer-common keystone-common
unit-nova-compute-0.log:2021-08-17 11:41:59 DEBUG openstack-upgrade python3-ceilometer python3-cinderclient python3-cliff python3-cryptography
unit-nova-compute-0.log:2021-08-17 11:42:01 DEBUG openstack-upgrade Get:53 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/victoria/main amd64 ceilometer-common all 1:15.0.0-0ubuntu2~cloud0 [27.0 kB]
unit-nova-compute-0.log:2021-08-17 11:42:01 DEBUG openstack-upgrade Get:54 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/victoria/main amd64 ceilometer-agent-compute all 1:15.0.0-0ubuntu2~cloud0 [16.9 kB]
unit-nova-compute-0.log:2021-08-17 11:42:01 DEBUG openstack-upgrade Get:55 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/victoria/main amd64 python3-ceilometer all 1:15.0.0-0ubuntu2~cloud0 [212 kB]
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Preparing to unpack .../49-ceilometer-common_1%3a15.0.0-0ubuntu2~cloud0_all.deb ...
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Unpacking ceilometer-common (1:15.0.0-0ubuntu2~cloud0) over (1:14.0.0-0ubuntu0.20.04.3) ...
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Preparing to unpack .../50-ceilometer-agent-compute_1%3a15.0.0-0ubuntu2~cloud0_all.deb ...
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Unpacking ceilometer-agent-compute (1:15.0.0-0ubuntu2~cloud0) over (1:14.0.0-0ubuntu0.20.04.3) ...
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Preparing to unpack .../51-python3-ceilometer_1%3a15.0.0-0ubuntu2~cloud0_all.deb ...
unit-nova-compute-0.log:2021-08-17 11:42:15 DEBUG openstack-upgrade Unpacking python3-ceilometer (1:15.0.0-0ubuntu2~cloud0) over (1:14.0.0-0ubuntu0.20.04.3) ...
unit-nova-compute-0.log:2021-08-17 11:42:34 DEBUG openstack-upgrade Setting up ceilometer-common (1:15.0.0-0ubuntu2~cloud0) ...
unit-nova-compute-0.log:2021-08-17 11:42:34 DEBUG openstack-upgrade Configuration file '/etc/ceilometer/ceilometer.conf'
unit-nova-compute-0.log:2021-08-17 11:42:57 DEBUG openstack-upgrade Setting up python3-ceilometer (1:15.0.0-0ubuntu2~cloud0) ...
unit-nova-compute-0.log:2021-08-17 11:43:00 DEBUG openstack-upgrade Setting up ceilometer-agent-compute (1:15.0.0-0ubuntu2~cloud0) ...
unit-nova-compute-0.log:2021-08-17 11:43:00 DEBUG openstack-upgrade Installing new version of config file /etc/init.d/ceilometer-agent-compute ...
unit-nova-compute-0.log:2021-08-17 11:43:02 DEBUG openstack-upgrade F...
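
The three-step sequence above can be modeled as a tiny sketch (illustrative only: the state transitions mirror this report, but the functions are hypothetical, not charm code):

```python
# Model of the upgrade sequence: pause masks the principal's service,
# the package upgrade then fails to restart the subordinate's service,
# and resume only restarts the principal's own services.
services = {
    "nova-compute": "running",
    "ceilometer-agent-compute": "running",
}

def pause():
    # 'pause' on the principal unit stops/masks its services; the
    # subordinate's service is stopped alongside it.
    services["nova-compute"] = "masked"
    services["ceilometer-agent-compute"] = "stopped"

def upgrade_packages():
    # dpkg tries to restart ceilometer-agent-compute, but its unit file
    # depends on nova-compute.service, which is masked, so the restart
    # fails and the service stays stopped.
    if services["nova-compute"] != "masked":
        services["ceilometer-agent-compute"] = "running"

def resume():
    # 'resume' on the principal only restarts the principal's services.
    services["nova-compute"] = "running"

pause()
upgrade_packages()
resume()
# At this point the subordinate's service has been left behind, stopped.
```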


Changed in charm-ceilometer-agent:
status: Triaged → In Progress
assignee: nobody → Aurelien Lourot (aurelien-lourot)
tags: added: openstack-upgrade
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

This will require collaboration between the subordinate charm (ceilometer-agent) and the principal charm (nova-compute), as was done in other charms. [1][2]

[1]: https://review.opendev.org/c/openstack/charm-interface-cinder-backend/+/782761
[2]: https://review.opendev.org/c/openstack/charm-keystone/+/781822

Changed in charm-nova-compute:
status: New → In Progress
importance: Undecided → High
assignee: nobody → Aurelien Lourot (aurelien-lourot)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (master)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-ceilometer-agent (master)
summary: - Ceilometer-agent-compute service not running
+ Ceilometer-agent-compute service not running after OpenStack upgrade
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

I'm now hitting what I think is the original poster's issue on a fairly large fresh deployment, with no OpenStack upgrade involved. I created the new dedicated lp:1947585 for this. The current lp:1927277 will now focus on solving the OpenStack upgrade path.

Changed in charm-ceilometer-agent:
status: In Progress → Fix Committed
Changed in charm-nova-compute:
status: In Progress → Fix Committed
Changed in charm-ceilometer-agent:
milestone: none → 22.04
Changed in charm-nova-compute:
milestone: none → 22.04
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceilometer-agent (master)

Reviewed: https://review.opendev.org/c/openstack/charm-ceilometer-agent/+/812129
Committed: https://opendev.org/openstack/charm-ceilometer-agent/commit/be45f7794504514ffbd4bc721af8f3df9a4c6038
Submitter: "Zuul (22348)"
Branch: master

commit be45f7794504514ffbd4bc721af8f3df9a4c6038
Author: Aurelien Lourot <email address hidden>
Date: Tue Sep 28 14:19:08 2021 +0200

    Publish releases packages map to principal charm

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This is similar to
    https://review.opendev.org/c/openstack/charm-interface-keystone-domain-backend/+/781658
    https://review.opendev.org/c/openstack/charm-layer-openstack/+/781624

    Change-Id: Ibd5bdcb141fc3103ee97123ff284fb2957802eba
    Closes-Bug: #1927277
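
A minimal sketch of the commit's idea, assuming a JSON-encoded map on the relation; the key name and helper below are hypothetical, not the actual relation schema:

```python
import json

# The subordinate publishes, per OpenStack release, the packages it
# manages, so the principal can compute the full package set for a
# target release in one upgrade operation, without round-tripping
# through relation processing at a critical moment.
subordinate_map = {
    "ussuri":   ["ceilometer-agent-compute", "python3-ceilometer"],
    "victoria": ["ceilometer-agent-compute", "python3-ceilometer"],
}

def packages_for_release(principal_pkgs, relation_data, release):
    """Merge the principal's own packages with the packages the
    subordinate declared for the target release."""
    published = json.loads(relation_data["releases-packages-map"])
    return sorted(set(principal_pkgs) | set(published.get(release, [])))

# What the principal would see on the relation (key name illustrative):
relation_data = {"releases-packages-map": json.dumps(subordinate_map)}
pkgs = packages_for_release(["nova-compute-kvm"], relation_data, "victoria")
```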

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (master)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/811139
Committed: https://opendev.org/openstack/charm-nova-compute/commit/8fb37dc0c1f76754f9b94863317af03b770516cd
Submitter: "Zuul (22348)"
Branch: master

commit 8fb37dc0c1f76754f9b94863317af03b770516cd
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-nova-compute (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/820013

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-nova-compute (master)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/820013
Committed: https://opendev.org/openstack/charm-nova-compute/commit/0a4acd6d03551a6bc6a91318093e887c3eca3af9
Submitter: "Zuul (22348)"
Branch: master

commit 0a4acd6d03551a6bc6a91318093e887c3eca3af9
Author: Aurelien Lourot <email address hidden>
Date: Wed Dec 1 11:09:57 2021 +0100

    Fix resume action failure

    Services have interdependencies and the order in which
    we attempt to resume them is important, otherwise the
    resume action may fail.

    Uncovered while and validated by running the
    openstack-upgrade tests. [1]

    [1]: https://github.com/openstack-charmers/charmed-openstack-tester

    Change-Id: I12218b47dc56b502ecc8578c6ab13acbd321bf26
    Related-Bug: #1927277
    Related-Bug: #1952882
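
The ordering idea in the commit above can be sketched with a topological sort; the dependency graph below is hypothetical, for illustration only:

```python
from graphlib import TopologicalSorter

# service -> services it depends on (which must be running first).
# These edges are illustrative, not the charm's actual service graph.
deps = {
    "libvirtd": set(),
    "nova-compute": {"libvirtd"},
    "ceilometer-agent-compute": {"nova-compute"},
}

# Resuming in topological order guarantees a dependent service is never
# started before its prerequisite, which is what the resume action needs.
resume_order = list(TopologicalSorter(deps).static_order())
```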

Changed in charm-ceilometer-agent:
status: Fix Committed → Fix Released
Changed in charm-nova-compute:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (stable/xena)

Fix proposed to branch: stable/xena
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/873282

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (stable/xena)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/873282
Committed: https://opendev.org/openstack/charm-nova-compute/commit/acb628e214e1ae2f88c97d036a86a33e4faea46b
Submitter: "Zuul (22348)"
Branch: stable/xena

commit acb628e214e1ae2f88c97d036a86a33e4faea46b
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Note, that this includes a ch sync as part of the cherry-pick to pick up
    the charm-helpers functions.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277
    (cherry picked from commit 8fb37dc0c1f76754f9b94863317af03b770516cd)

tags: added: in-stable-xena
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (stable/wallaby)

Fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/890667

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (stable/victoria)

Fix proposed to branch: stable/victoria
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/890668

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (stable/ussuri)

Fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/890669

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (stable/train)

Fix proposed to branch: stable/train
Review: https://review.opendev.org/c/openstack/charm-nova-compute/+/890670

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/890667
Committed: https://opendev.org/openstack/charm-nova-compute/commit/3a664fc18c873ab621ef15f4e91206afbe361f3c
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 3a664fc18c873ab621ef15f4e91206afbe361f3c
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277
    (cherry picked from commit 8fb37dc0c1f76754f9b94863317af03b770516cd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (stable/victoria)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/890668
Committed: https://opendev.org/openstack/charm-nova-compute/commit/282830db93741fe33d21b790ec5dd6a99b1ddd47
Submitter: "Zuul (22348)"
Branch: stable/victoria

commit 282830db93741fe33d21b790ec5dd6a99b1ddd47
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that have opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277
    (cherry picked from commit 8fb37dc0c1f76754f9b94863317af03b770516cd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (stable/ussuri)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/890669
Committed: https://opendev.org/openstack/charm-nova-compute/commit/f31b1998fdf63c3a21dbe9503cd7b7510b03929c
Submitter: "Zuul (22348)"
Branch: stable/ussuri

commit f31b1998fdf63c3a21dbe9503cd7b7510b03929c
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277
    (cherry picked from commit 8fb37dc0c1f76754f9b94863317af03b770516cd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (stable/train)

Reviewed: https://review.opendev.org/c/openstack/charm-nova-compute/+/890670
Committed: https://opendev.org/openstack/charm-nova-compute/commit/8d30677781b3116ede71ca6abc2b05a11fbd8926
Submitter: "Zuul (22348)"
Branch: stable/train

commit 8d30677781b3116ede71ca6abc2b05a11fbd8926
Author: Aurelien Lourot <email address hidden>
Date: Mon Sep 27 15:52:48 2021 +0200

    Process subordinate releases packages map

    For principal - subordinate plugin type relations where the
    principal Python payload imports code from packages managed by a
    subordinate, upgrades can be problematic.

    This change will allow a subordinate charm that has opted into the
    feature to inform its principal about all implemented release -
    packages combinations ahead of time. With this information in place
    the principal can do the upgrade in one operation without risk of
    charm relation RPC type processing at a critical moment.

    This makes use of
    https://github.com/juju/charm-helpers/pull/643

    This is similar to
    https://review.opendev.org/c/openstack/charm-keystone/+/781822

    Also fixed broken link to charm-guide.

    Change-Id: Iaf5b44be70ee108cbe88b4a26f0f15f915d507fe
    Closes-Bug: #1927277
    (cherry picked from commit 8fb37dc0c1f76754f9b94863317af03b770516cd)
