jammy: _openssl.abi3.so: undefined symbol: FIPS_mode

Bug #1975491 reported by James Page
This bug affects 2 people
Affects                               Status        Importance  Assigned to  Milestone
OpenStack Ironic Conductor Charm      Triaged       High        Unassigned
  Yoga                                In Progress   Undecided   Jadon Naas
  Zed                                 Fix Released  Undecided   Unassigned
OpenStack Keystone SAML Mellon Charm  Triaged       High        Unassigned
  Yoga                                Triaged       Undecided   Unassigned
  Zed                                 Fix Released  Undecided   Unassigned
charm-octavia-diskimage-retrofit      Fix Released  Undecided   Unassigned
  Yoga                                Fix Released  Undecided   Unassigned

Bug Description

When deploying on 22.04 and running the retrofit action:

Traceback (most recent call last):
  File "/home/ubuntu/tools/charmed-openstack-tester/.tox/func-target/lib/python3.8/site-packages/zaza/openstack/charm_tests/octavia/diskimage_retrofit/setup.py", line 54, in retrofit_amphora_image
    action = zaza.model.run_action(
  File "/home/ubuntu/tools/charmed-openstack-tester/.tox/func-target/lib/python3.8/site-packages/zaza/__init__.py", line 108, in _wrapper
    return run(_run_it())
  File "/home/ubuntu/tools/charmed-openstack-tester/.tox/func-target/lib/python3.8/site-packages/zaza/__init__.py", line 93, in run
    return task.result()
  File "/home/ubuntu/tools/charmed-openstack-tester/.tox/func-target/lib/python3.8/site-packages/zaza/__init__.py", line 107, in _run_it
    return await f(*args, **kwargs)
  File "/home/ubuntu/tools/charmed-openstack-tester/.tox/func-target/lib/python3.8/site-packages/zaza/model.py", line 987, in async_run_action
    raise ActionFailed(action_obj, output=output)
zaza.model.ActionFailed: Run of action "retrofit-image" with parameters "{'force': False, 'source-image': ''}" on "octavia-diskimage-retrofit/0" failed with "/var/lib/juju/agents/unit-octavia-diskimage-retrofit-0/.venv/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so: undefined symbol: FIPS_mode" (id=62 status=failed enqueued=2022-05-23T13:41:40Z started=2022-05-23T13:41:40Z completed=2022-05-23T13:41:43Z output={'Code': '0'})
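
For reference, the failing step can be reproduced directly against a deployed unit (action and unit names taken from the trace above), along the lines of:
```
$ juju run-action octavia-diskimage-retrofit/0 retrofit-image --wait
```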

Revision history for this message
James Page (james-page) wrote :

This might be related to the fact that the charm is built on 20.04 but then runs on 22.04.

Revision history for this message
Robert Gildein (rgildein) wrote (last edit ):

I ran into the same problem here. I tried to build on 22.04 (current master branch), but charmcraft 1.5/stable does not support building on 22.04, and the current charmcraft.yaml file does not work with a newer version of charmcraft.

This is a snippet from my bundle where I use the jammy series.
```yaml
octavia-diskimage-retrofit:
    charm: ch:octavia-diskimage-retrofit
    channel: yoga/edge
    options:
      amp-image-tag: 'octavia-amphora'
      retrofit-series: jammy
```
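
For what it's worth, a minimal sketch of a charmcraft.yaml that a newer charmcraft (1.7+) should accept for a 22.04 build could look like the following; the part layout and the reactive plugin are assumptions based on how other reactive charms are set up, not the file currently in this repository:
```yaml
type: charm
parts:
  charm:
    source: src/
    plugin: reactive
    build-snaps:
      - charm
bases:
  - build-on:
      - name: ubuntu
        channel: "22.04"
    run-on:
      - name: ubuntu
        channel: "22.04"
```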

Revision history for this message
Frode Nordahl (fnordahl) wrote :

This issue needs further fixing in order for the charm to work on Jammy. At present the charm pins cryptography<3.4 in order to avoid the Rust build-time dependency.

Actually, the package in Ubuntu does something similar, but it also contains patches to allow cryptography 3.4.x to work with OpenSSL 3.x [0].

In a charm's wheelhouse we do not have the luxury, or the desire, to carry patches for Python packages, so I think what we need to do is unpin cryptography and just add the required build dependencies.

Given that reactive charms already compile C source dependencies, avoiding dependencies written in other languages feels artificial to me.

We of course want to stop building source packages on install altogether, but that is a longer canvas to bleach.

0: https://git.launchpad.net/ubuntu/+source/python-cryptography/tree/debian/patches/openssl3?h=ubuntu/jammy
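
A rough sketch of what "unpin and add the required build dependencies" could look like in charmcraft.yaml follows; the package list is an assumption about what a source build of a newer cryptography would need (Rust toolchain plus the usual C headers), not a tested change:
```yaml
parts:
  charm:
    source: src/
    plugin: reactive
    build-packages:
      # assumed build dependencies for compiling cryptography from source
      - python3-dev
      - libffi-dev
      - libssl-dev
      - rustc
      - cargo
```
The wheelhouse side would then simply drop the cryptography<3.4 pin.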

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Additionally, at least charmcraft 1.7 is needed to build on 22.04.
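
(For local builds, refreshing the charmcraft snap to a channel that carries 1.7 or later should be enough, e.g.:
```
$ sudo snap refresh charmcraft --classic --channel=latest/stable
```
The exact channel is an assumption; pick whichever stable track currently ships >= 1.7.)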

Changed in charm-octavia-diskimage-retrofit:
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-octavia-diskimage-retrofit (master)

Reviewed: https://review.opendev.org/c/openstack/charm-octavia-diskimage-retrofit/+/852747
Committed: https://opendev.org/openstack/charm-octavia-diskimage-retrofit/commit/2fb0cf125028b3e164741afcdbff466038de134a
Submitter: "Zuul (22348)"
Branch: master

commit 2fb0cf125028b3e164741afcdbff466038de134a
Author: Frode Nordahl <email address hidden>
Date: Wed Aug 10 16:44:55 2022 +0200

    Build separately for each supported series and use binary builds

    Charms for OpenStack Yoga supports both Ubuntu Focal and Jammy
    which means Python 3.8 and Python 3.10. Managing dependencies
    across those two versions is non-trivial and we need to build
    the charm on the series the charm is supposed to support.

    Switch to using a binary build which allows pip's dependency
    resolution to work.

    Update charm to consume the 1.0/stable track for the snap.

    Bundles:
    - Drop the renaming of the charm artifact and reference the
      series specific charm artifact in the bundles instead.
    - Fix gss mirror list, jammy retrofit-series and use 8.0/stable
      for MySQL.

    Closes-Bug: #1981334
    Closes-Bug: #1970653
    Closes-Bug: #1975491
    Change-Id: I8c924038ee1c5ff258c41a44ad14ebc86a107b1b

Changed in charm-octavia-diskimage-retrofit:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-octavia-diskimage-retrofit (stable/yoga)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-octavia-diskimage-retrofit (stable/yoga)

Reviewed: https://review.opendev.org/c/openstack/charm-octavia-diskimage-retrofit/+/860979
Committed: https://opendev.org/openstack/charm-octavia-diskimage-retrofit/commit/5fd698d29757df7603691078fff80cb38edb9cb6
Submitter: "Zuul (22348)"
Branch: stable/yoga

commit 5fd698d29757df7603691078fff80cb38edb9cb6
Author: Frode Nordahl <email address hidden>
Date: Wed Aug 10 16:44:55 2022 +0200

    Build separately for each supported series and use binary builds

    Charms for OpenStack Yoga supports both Ubuntu Focal and Jammy
    which means Python 3.8 and Python 3.10. Managing dependencies
    across those two versions is non-trivial and we need to build
    the charm on the series the charm is supposed to support.

    Switch to using a binary build which allows pip's dependency
    resolution to work.

    Update charm to consume the 1.0/stable track for the snap.

    Bundles:
    - Drop the renaming of the charm artifact and reference the
      series specific charm artifact in the bundles instead.
    - Fix gss mirror list, jammy retrofit-series and use 8.0/stable
      for MySQL.

    Closes-Bug: #1981334
    Closes-Bug: #1970653
    Closes-Bug: #1975491
    Change-Id: I8c924038ee1c5ff258c41a44ad14ebc86a107b1b
    (cherry picked from commit 2fb0cf125028b3e164741afcdbff466038de134a)

tags: added: in-stable-yoga
Revision history for this message
Przemyslaw Hausman (phausman) wrote :

I'm hitting the same issue with the ironic-conductor charm on jammy. Here's the result of running the juju action `set-temp-url-secret`:

```
$ juju run-action ironic-conductor/0 set-temp-url-secret --wait
unit-ironic-conductor-0:
  UnitId: ironic-conductor/0
  id: "40"
  message: '/var/lib/juju/agents/unit-ironic-conductor-0/.venv/lib/python3.10/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so:
    undefined symbol: FIPS_mode'
  results: {}
  status: failed
  timing:
    completed: 2022-12-01 15:23:43 +0000 UTC
    enqueued: 2022-12-01 15:23:41 +0000 UTC
    started: 2022-12-01 15:23:41 +0000 UTC
```

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This appears to affect charm-ironic-conductor for releases prior to zed.

Changed in charm-ironic-conductor:
status: New → Triaged
importance: Undecided → High
Revision history for this message
Felipe Reyes (freyes) wrote :

We might need to convert ironic-conductor to a "binary charm".

Revision history for this message
Felipe Reyes (freyes) wrote :

> I'm hitting the same issue with ironic-conductor charm on jammy.

Is this jammy-yoga?

Revision history for this message
Felipe Reyes (freyes) wrote :

Looking at this recent CI run [0] of jammy-yoga, the set_temp_url_secret() setup function allegedly ran successfully:

2023-02-18 03:32:42.452818 | focal-medium | Configure zaza.openstack.charm_tests.ironic.setup.set_temp_url_secret:
2023-02-18 03:32:42.452826 | focal-medium | Start: 1676689848.988125
2023-02-18 03:32:42.452834 | focal-medium | Finish: 1676689852.602591
2023-02-18 03:32:42.452842 | focal-medium | Elapsed Time: 3.6144659519195557
2023-02-18 03:32:42.452850 | focal-medium | PCT Of Run Time: 1

However, the juju status suggests otherwise:

ironic-conductor/0* blocked idle 12 172.16.0.163 invalid enabled-deploy-interfaces config, run set-temp-url-secret action on leader to enable direct deploy method

Since the deployment succeeded, there is no juju-crashdump available for inspection, so this will need to be reproduced locally (both the apparent false positive of the action execution and the action failure reported in comment #9).

[0] https://review.opendev.org/c/openstack/charm-ironic-conductor/+/872438/1#message-e0a1db58a3c6f20205d10ea32eca229b1e91b765

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

I'm hitting the same issue installing keystone-saml-mellon from yoga/stable on jammy. Subscribing field-critical because it is blocking a customer deployment.

```
$ juju debug-log -i keystone-saml-mellon/4
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.migrationminion migration phase is now: NONE
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.logger logger worker started
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.upgrader no waiter, upgrader is done
unit-keystone-saml-mellon-4: 22:58:30 ERROR juju.worker.meterstatus error running "meter-status-changed": charm missing from disk
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.uniter unit "keystone-saml-mellon/4" started
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.uniter resuming charm install
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.uniter.charm downloading ch:amd64/jammy/keystone-saml-mellon-31 from API server
unit-keystone-saml-mellon-4: 22:58:31 INFO juju.worker.uniter hooks are retried true
unit-keystone-saml-mellon-4: 22:58:31 INFO juju.worker.uniter.storage initial storage attachments ready
unit-keystone-saml-mellon-4: 22:58:31 INFO juju.worker.uniter found queued "install" hook
unit-keystone-saml-mellon-4: 22:58:30 INFO juju Starting unit workers for "keystone-saml-mellon/4"
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.apicaller [79829f] "unit-keystone-saml-mellon-4" successfully connected to "10.77.0.53:17070"
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.apicaller [79829f] password changed for "unit-keystone-saml-mellon-4"
unit-keystone-saml-mellon-4: 22:58:30 INFO juju.worker.apicaller [79829f] "unit-keystone-saml-mellon-4" successfully connected to "10.77.0.53:17070"
unit-keystone-saml-mellon-4: 23:03:20 INFO unit.keystone-saml-mellon/4.juju-log Reactive main running for hook install
unit-keystone-saml-mellon-4: 23:03:20 INFO unit.keystone-saml-mellon/4.juju-log Initializing Leadership Layer (is leader)
unit-keystone-saml-mellon-4: 23:03:20 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: reactive/layer_openstack.py:15:default_install
unit-keystone-saml-mellon-4: 23:03:20 INFO unit.keystone-saml-mellon/4.juju-log Installing ['libapache2-mod-auth-mellon'] with options: ['--option=Dpkg::Options::=--force-confold']
unit-keystone-saml-mellon-4: 23:03:29 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: reactive/keystone_saml_mellon_handlers.py:29:keystone_departed
unit-keystone-saml-mellon-4: 23:03:29 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: reactive/keystone_saml_mellon_handlers.py:69:assess_status
unit-keystone-saml-mellon-4: 23:03:29 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: hooks/relations/tls-certificates/requires.py:109:broken:certificates
unit-keystone-saml-mellon-4: 23:03:29 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: hooks/relations/keystone-fid-service-provider/provides.py:66:departed:keystone-fid-service-provider
unit-keystone-saml-mellon-4: 23:03:29 INFO unit.keystone-saml-mellon/4.juju-log Invoking reactive handler: hooks/relations/juju-info/requires.py:24:broken:container
unit-key...
```

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

The field-critical subscription is for both ironic-conductor and keystone-saml-mellon. We have both of these applications in a customer deployment at the moment.

Revision history for this message
Corey Bryant (corey.bryant) wrote (last edit ):

It looks like charm-ironic-conductor and charm-keystone-saml-mellon both build with --binary-wheels in their stable/zed branches, so this might just need one more cherry-pick to stable/yoga for now.
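
If it helps anyone preparing that cherry-pick, the stable/zed change presumably amounts to passing that flag through charmcraft's reactive plugin, roughly as sketched below (this is an assumption based on the flag name, not the actual diff):
```yaml
parts:
  charm:
    source: src/
    plugin: reactive
    reactive-charm-build-arguments:
      - --binary-wheels
```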

Revision history for this message
Corey Bryant (corey.bryant) wrote :

As a workaround, I think you should be able to use the zed charm tracks with openstack-origin set to yoga.
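
In bundle form the workaround would look something like this (the names and the rest of the application config are illustrative; on jammy, openstack-origin=distro resolves to yoga):
```yaml
ironic-conductor:
  charm: ch:ironic-conductor
  channel: zed/stable
  series: jammy
  options:
    openstack-origin: distro
```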

Changed in charm-keystone-saml-mellon:
status: New → Triaged
importance: Undecided → High
Revision history for this message
Przemyslaw Hausman (phausman) wrote :

Corey, thank you for looking into it!

I have configured `channel: zed/stable` and left openstack-origin unchanged (it defaults to `distro`, and since I'm deploying on jammy that should mean yoga) for both keystone-saml-mellon and ironic-conductor.

For keystone-saml-mellon it did the job. I was able to successfully deploy the charm.

But the workaround does not work for ironic-conductor. When I run `set-temp-url-secret` action, it fails, and I get the following in the debug-log:

```
$ juju debug-log -i ironic-conductor/0
[...]
unit-ironic-conductor-0: 23:37:02 ERROR unit.ironic-conductor/0.juju-log action "set-temp-url-secret" failed: "local variable 'keystone_session' referenced before assignment" "Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-ironic-conductor-0/charm/actions/set-temp-url-secret", line 112, in main
    action(args)
  File "/var/lib/juju/agents/unit-ironic-conductor-0/charm/actions/set-temp-url-secret", line 60, in set_temp_url_secret
    os_cli = api_utils.OSClients(keystone_session)
UnboundLocalError: local variable 'keystone_session' referenced before assignment
```
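
For context, that UnboundLocalError is the classic symptom of the session setup itself raising and the error being logged rather than re-raised; a simplified illustration of the pattern (not the charm's actual code) is:
```python
# Simplified illustration only -- none of this is the charm's real code.
def set_temp_url_secret(create_session, make_clients, log):
    try:
        keystone_session = create_session()  # raises, so the name is never bound
    except Exception as exc:
        log("Failed to create keystone session ({})".format(exc))
    # Execution continues and hits the unbound name, producing the
    # UnboundLocalError above instead of the underlying error message.
    return make_clients(keystone_session)
```
So the interesting error is most likely whatever was logged just before the traceback.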

Revision history for this message
Corey Bryant (corey.bryant) wrote :

@phausman, can you share the show-action-output from the failed action? There should be a log line with "Failed to create keystone session ..." that may have more information.

Revision history for this message
Natalia Litvinova (natalytvinova) wrote :

@corey.bryant, unfortunately it doesn't show more than this:

```
$ juju show-action-output 194 -m openstack
UnitId: ironic-conductor/2
id: "194"
message: local variable 'keystone_session' referenced before assignment
results: {}
status: failed
timing:
  completed: 2023-04-28 13:50:03 +0000 UTC
  enqueued: 2023-04-28 13:50:02 +0000 UTC
  started: 2023-04-28 13:50:02 +0000 UTC
```

Revision history for this message
Corey Bryant (corey.bryant) wrote :

In a debug session with Natalia we were able to find a few more details:

Failed to create keystone session ("'PathDistribution' object has no attribute '_normalized_name'")

This matches https://github.com/pypa/setuptools/issues/3452, which says to update to importlib_metadata 4.3 or later.

The version of importlib_metadata in the wheelhouse.txt is 2.1.3.
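
If the fix is simply lifting that pin, the wheelhouse change would presumably be a one-line bump along these lines (illustrative only):
```
# wheelhouse.txt (sketch): require a release new enough for setuptools' PathDistribution changes
importlib_metadata>=4.3
```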

Revision history for this message
Corey Bryant (corey.bryant) wrote :

I'm moving this new issue reported in comments #18-#21 to a new bug at: https://bugs.launchpad.net/charm-ironic-conductor/+bug/2018018

Changed in charm-octavia-diskimage-retrofit:
status: Fix Committed → Fix Released
Revision history for this message
Jadon Naas (jadonn) wrote :

I've cherry-picked the commit from stable/zed that switches to building with binary wheels, which Corey mentioned previously as the fix. It's in Gerrit at https://review.opendev.org/c/openstack/charm-ironic-conductor/+/887116. Tests pass, but it will need review, of course.
