
Jammy machines can't connect to proxy

Bug #1991552 reported by Bas de Bruijne
This bug affects 3 people
Affects         Status      Importance  Assigned to  Milestone
Canonical Juju  Incomplete  High        Unassigned
2.9             Incomplete  High        Unassigned
3.0             Incomplete  High        Unassigned
lxd             New         Undecided   Unassigned

Bug Description

In test run https://solutions.qa.canonical.com/testruns/testRun/6d96629c-3db5-442f-a603-b7f0e08b1d1b, openstack-dashboard/0 fails to install with the following messages in the log:

```
unit-openstack-dashboard-0: 22:42:59 TRACE juju.worker.uniter.relation no create relation operation to run
unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation running operation run install hook for openstack-dashboard/0
unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation preparing operation "run install hook" for openstack-dashboard/0
unit-openstack-dashboard-0: 22:42:59 DEBUG juju.worker.uniter.operation executing operation "run install hook" for openstack-dashboard/0
unit-openstack-dashboard-0: 22:43:06 WARNING unit.openstack-dashboard/0.install E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/i/ieee-data/ieee-data_20210605.1_all.deb Temporary failure resolving 'squid.internal'
unit-openstack-dashboard-0: 22:43:06 WARNING unit.openstack-dashboard/0.install E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-netaddr/python3-netaddr_0.8.0-2_all.deb Temporary failure resolving 'squid.internal'
unit-openstack-dashboard-0: 22:43:06 WARNING unit.openstack-dashboard/0.install E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
unit-openstack-dashboard-0: 22:43:06 ERROR juju.worker.uniter.operation hook "install" (via explicit, bespoke hook script) failed: exit status 100
unit-openstack-dashboard-0: 22:43:06 DEBUG juju.worker.uniter.operation lock released for openstack-dashboard/0
unit-openstack-dashboard-0: 22:43:06 TRACE juju.worker.uniter.relation create relation resolver next op for new remote relations map[int]remotestate.RelationSnapshot{
```

It could be a squid outage, but since we started testing OpenStack on Jammy we have been seeing this error frequently across many different charms. It seems that the proxy connection is not stable on Jammy. It is possible that this is a bug in lxd or OpenStack rather than Juju.

Crashdumps and logs for this run can be found here:
https://oil-jenkins.canonical.com/artifacts/6d96629c-3db5-442f-a603-b7f0e08b1d1b/index.html

Tags: cdo-qa
Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1991552] [NEW] Jammy machines can't connect to proxy

 Temporary failure resolving 'squid.internal'.

Is there something that needs to be tweaked inside the instances to be able
to resolve '*.internal' addresses? They certainly are generally available.
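
For anyone looking at an affected container, a few illustrative commands to check resolution of '*.internal' names from inside the instance (these commands are not taken from the bug report):

```
# Which DNS servers and search domains is the container using, and does
# squid.internal resolve at all? (run inside the affected container)
resolvectl status
resolvectl query squid.internal
getent hosts squid.internal
```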

Revision history for this message
Bas de Bruijne (basdbruijne) wrote :

Do you have an idea of what needs to be tweaked?

It is consistently only one out of 3 units that runs into this error, so I don't think this is due to a misconfiguration.

This is not a problem we have seen in k8s deployments on jammy, and it is not necessarily reproducible with an OpenStack deployment. My guess is that the network is somehow more overloaded on jammy.

Revision history for this message
Marian Gasparovic (marosg) wrote :

15+ test runs, and it happened on each of them on at least one unit, usually on no more than 3-4 units.
Also, it is not happening on a silo with more resources; my guess is a race condition.
As it happens every time, we can reproduce it, but only in our environment. If somebody is willing to have a look at the live environment, we can arrange that.

Revision history for this message
Marian Gasparovic (marosg) wrote :

We are still hitting this during juju 2.9.37 testing, which prevents us from deploying jammy.

After talking to manadart, I saw that the lxds which fail are missing one IP. He suggested it is similar to
https://bugs.launchpad.net/juju/+bug/1993137 and https://bugs.launchpad.net/juju/+bug/1994488, which are fixed in 2.9.37.
But we are still hitting this.

Revision history for this message
Bas de Bruijne (basdbruijne) wrote :

I'm also seeing machines missing IPs, for example:

```
etcd/0* active idle 0/lxd/3 10.246.165.217 2379/tcp Healthy with 2 known peers
  filebeat/10 active idle 10.246.165.217 Filebeat ready.
  landscape-client/12 maintenance idle 10.246.165.217 Need computer-title and juju-info to proceed
  logrotated/5 active idle 10.246.165.217 Unit is ready.
  nrpe/17 active idle 10.246.165.217 icmp,5666/tcp Ready
  prometheus-grok-exporter/11 active idle 10.246.165.217 9144/tcp Unit is ready
  telegraf/11 active idle 10.246.165.217 9103/tcp Monitoring etcd/0 (source version/commit d208a64)
etcd/1 error idle 1/lxd/3 10.246.166.183 hook failed: "install"
etcd/2 active idle 2/lxd/4 10.246.164.117 2379/tcp Healthy with 2 known peers
  filebeat/59 active idle 10.246.164.117 Filebeat ready.
  landscape-client/58 maintenance idle 10.246.164.117 Need computer-title and juju-info to proceed
  logrotated/53 active idle 10.246.164.117 Unit is ready.
  nrpe/68 active idle 10.246.164.117 icmp,5666/tcp Ready
  prometheus-grok-exporter/58 active idle 10.246.164.117 9144/tcp Unit is ready
  telegraf/59 active idle 10.246.164.117 9103/tcp Monitoring etcd/2 (source version/commit d208a64)
```

0/lxd/3 and 1/lxd/3 should have addresses on the same spaces, but looking at the lxc list outputs:
```
+---------------------+---------+-----------------------+------+-----------+-----------+
| juju-76c074-0-lxd-3 | RUNNING | 10.246.170.1 (eth0)   |      | CONTAINER | 0         |
|                     |         | 10.246.165.217 (eth1) |      |           |           |
+---------------------+---------+-----------------------+------+-----------+-----------+
```
and
```
+---------------------+---------+-----------------------+------+-----------+-----------+
| juju-76c074-1-lxd-3 | RUNNING | 10.246.166.183 (eth1) |      | CONTAINER | 0         |
+---------------------+---------+-----------------------+------+-----------+-----------+
```

This is probably related to LP: #1956981 except that the addresses are missing on different spaces.

Revision history for this message
Trent Lloyd (lathiat) wrote :

(Not an LXD expert, I'm from Sustaining Engineering, but reading the thread got me a little curious about how things work, so here are some details from a small adventure I had while trying to look at it.)

Working from https://oil-jenkins.canonical.com/artifacts/6d96629c-3db5-442f-a603-b7f0e08b1d1b/generated/generated/openstack/juju-crashdump-openstack-2022-09-30-22.52.15.tar.gz - in this case the broken openstack-dashboard/0 was machine 3/lxd/9

The weird MAC address is key. 00:16:3e is the range LXD auto-generates its MACs from. Other MACs, such as the 3e:33:82:f5:f7:8a you mentioned and the 9e:d6:cb:1d:3e:7a we see in this dump, are from the 'locally administered' range, which has the 0x02 bit of the first octet set - any address matching x[26AE]:xx:xx:xx:xx:xx. This range is used by most other things that auto-generate an address, including the kernel and systemd. For example, when you first create a veth pair (which is how an LXD container is connected to a bridge), the kernel assigns it a MAC from this range.
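
A quick way to tell the two apart (a minimal bash sketch; the MACs are the ones quoted above):

```
# The 0x02 bit of the first octet marks a locally administered MAC.
mac=9e:d6:cb:1d:3e:7a
printf '%d\n' $(( 0x${mac%%:*} & 0x02 ))   # prints 2: locally administered (kernel/systemd style)
mac=00:16:3e:d9:7d:9a
printf '%d\n' $(( 0x${mac%%:*} & 0x02 ))   # prints 0: from LXD's 00:16:3e range
```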

We arrive here at a container where eth0 has the wrong MAC. We can see the "ip link" output in 3/lxd/9/var/log/syslog

We get:
```
43: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:d6:cb:1d:3e:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9cd6:cbff:fe1d:3e7a/64 scope link
45: eth1@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d9:7d:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.246.173.5/22 brd 10.246.175.255 scope global eth1
47: eth2@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:8e:ea:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.246.166.209/22 brd 10.246.167.255 scope global eth2
```

What we expected can be found in 3/lxd/9/etc/netplan/99-juju.yaml

```
eth0 00:16:3e:dd:d2:fd 10.246.169.7/22
eth1 00:16:3e:d9:7d:9a 10.246.173.5/22
eth2 00:16:3e:8e:ea:b8 10.246.166.209/22
```

This incorrect MAC address explains why the IP is not set: the Juju netplan configuration uses a "match: macaddress" stanza, so although the interface has the correct name, if the MAC address doesn't match then netplan/systemd-networkd won't add the IP address.
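
For reference, the per-NIC stanza looks roughly like this (a sketch assembled from the eth0 entry in the summary above, not the verbatim file contents):

```
# /etc/netplan/99-juju.yaml (shape assumed, eth0 entry only)
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "00:16:3e:dd:d2:fd"
      addresses:
        - 10.246.169.7/22
```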

So the question is how we arrive at an eth0 with the wrong MAC address.

Looking at the LXD bridged network setup code here: https://github.com/lxc/lxd/blob/09a226043e705369973596440405aa94203a00cf/lxd/device/device_utils_network.go#L229

It creates a veth pair, and then, after creation, sets the MAC address. So it's possible we got stuck with the default kernel-generated one and that LXD failed to apply its generated MAC for some reason.
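
In other words, the sequence is roughly equivalent to the following (an illustration of the two-step flow, not LXD's literal commands):

```
# Step 1: create the veth pair; the kernel assigns both ends a random
# locally administered MAC (x[26AE]:...) at this point.
ip link add vethHOST type veth peer name vethCTN
# Step 2: overwrite the MAC with LXD's generated 00:16:3e:... address.
# Anything that reacts to the interface between steps 1 and 2 still sees
# the kernel-generated address.
ip link set vethCTN address 00:16:3e:dd:d2:fd
```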

While I haven't done an exhaustive search, the code paths here generally seem to fail and tear down the interface if setting the MAC fails. My very rough guess about which veth interface it was (there is no way from the juju-crashdump to know which interface is which for sure, but I made a good guess based on timing/etc.) is that the interface did come up and get added to the bridge (which all happens later, after the MAC is set). So it seems...


tags: added: cdo-qa
Revision history for this message
Marian Gasparovic (marosg) wrote :

Confirmed to be happening also with LXD 4/stable on jammy.

Revision history for this message
Ante Karamatić (ivoks) wrote :

FWIW, the container config (lxc config show) for misbehaving containers looks correct, but the container 'boots up' with the wrong MAC.
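
One way to see the mismatch (illustrative commands; the container name and MAC are taken from the earlier comments):

```
# The MAC LXD intends to use is stored in the container's volatile config keys...
lxc config show juju-76c074-1-lxd-3 | grep hwaddr
# ...while the MAC the container actually booted with is visible from inside it.
lxc exec juju-76c074-1-lxd-3 -- cat /sys/class/net/eth0/address
```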

Revision history for this message
Nick Rosbrook (enr0n) wrote :

To further investigate systemd's involvement here, please start by enabling debug logging on systemd-udevd:

$ cat /etc/systemd/system/systemd-udevd.service.d/override.conf
[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
LogRateLimitIntervalSec=0
$ systemctl daemon-reload # Or, reboot

Looking for messages matching "MAC address" should show most relevant udev log messages:

$ journalctl -u systemd-udevd -b 0 --grep "MAC address"

If this is a race between udev and lxd, you should be able to work around it from the udev side by overriding the default MACAddressPolicy[1]:

$ cat /etc/systemd/network/99-default.link.d/mac-address-policy-none.conf
[Link]
MACAddressPolicy=none

If that override works as intended, then systemd should never try to set the MAC address for an interface (unless there are some other configs in place on the system that take precedence over 99-default.link).

[1] https://www.freedesktop.org/software/systemd/man/systemd.link.html#MACAddressPolicy=

Revision history for this message
Thomas Parrott (tomparrott) wrote :

Let's see if this helps: it works around an external entity racing LXD in setting the MAC address via ip link, by giving liblxc a second chance to set the MAC address just before the NIC is passed to the container.

https://github.com/lxc/lxd/pull/11144

Revision history for this message
Marian Gasparovic (marosg) wrote :

@enr0n I don't see any MAC messages, but the MAC is still wrong.

I am setting override.conf via cloud-init, which may be too late in the game?

ubuntu@juju-4ccb23-4-lxd-1:~$ journalctl -u systemd-udevd -b 0
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd-udevd[107]: Failed to chown '/dev/net/tun' 0 0: Operation not permitted
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd-udevd[107]: Failed to apply permissions on static device nodes: Operation not permitted
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd-udevd[156]: Using default interface naming scheme 'v249'.
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd-udevd[158]: Using default interface naming scheme 'v249'.
Nov 23 15:41:54 juju-4ccb23-4-lxd-1 systemd-udevd[157]: Using default interface naming scheme 'v249'.
Nov 23 15:42:00 juju-4ccb23-4-lxd-1 systemd-udevd[363]: Using default interface naming scheme 'v249'.
Nov 23 15:42:00 juju-4ccb23-4-lxd-1 systemd-udevd[364]: Using default interface naming scheme 'v249'.
Nov 23 15:42:00 juju-4ccb23-4-lxd-1 systemd-udevd[365]: Using default interface naming scheme 'v249'.
ubuntu@juju-4ccb23-4-lxd-1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
55: eth0@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:c0:97:31:d3:1e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::4c0:97ff:fe31:d31e/64 scope link
       valid_lft forever preferred_lft forever
57: eth1@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:3c:63:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.246.172.109/22 brd 10.246.175.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe3c:632c/64 scope link
       valid_lft forever preferred_lft forever
59: eth2@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:06:12:20 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.246.167.7/22 brd 10.246.167.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe06:1220/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@juju-4ccb23-4-lxd-1:~$ cat /etc/systemd/system/systemd-udevd.service.d/override.conf
[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
LogRateLimitIntervalSec=0

Revision history for this message
Marian Gasparovic (marosg) wrote :

We had 10+ test runs with https://github.com/lxc/lxd/pull/11144 and we did not encounter wrong MAC at all.

Revision history for this message
Thomas Parrott (tomparrott) wrote :

Glad to hear that https://github.com/lxc/lxd/pull/11144 appears to have worked around the issue.

Revision history for this message
Joseph Phillips (manadart) wrote :

Marking incomplete for Juju; it will expire for that project without further action.

Many thanks, Trent Lloyd, for the comprehensive delving. Hit a Juju team member up for a beer at the next sprint.

Changed in juju:
status: New → Incomplete
Revision history for this message
Marian Gasparovic (marosg) wrote :

@tomparrott

We are hitting this bug again. It happens much less often than before https://github.com/lxc/lxd/pull/11144, but it is happening again.

One of the failed runs:
https://oil-jenkins.canonical.com/artifacts/9ce824e3-1cc4-4fba-8a6d-48dcabfa5e54/index.html

ceph-mon/0 failed installing because eth0 had no IP. cloud-init shows a weird MAC address:

```
ci-info: ++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++
ci-info: +--------+------+------------------------------+-----------+-------+-------------------+
ci-info: | Device |  Up  |           Address            |    Mask   | Scope |     Hw-Address    |
ci-info: +--------+------+------------------------------+-----------+-------+-------------------+
ci-info: |  eth0  | True | fe80::b855:b4ff:fece:8ec9/64 |     .     |  link | ba:55:b4:ce:8e:c9 |
ci-info: |  eth1  | True | fe80::216:3eff:fe19:e2ca/64  |     .     |  link | 00:16:3e:19:e2:ca |
ci-info: |  eth2  | True | fe80::216:3eff:febf:f697/64  |     .     |  link | 00:16:3e:bf:f6:97 |
ci-info: |   lo   | True |          127.0.0.1           | 255.0.0.0 |  host |         .         |
ci-info: |   lo   | True |           ::1/128            |     .     |  host |         .         |
ci-info: +--------+------+------------------------------+-----------+-------+-------------------+
```

Revision history for this message
Marian Gasparovic (marosg) wrote :

Can we have another look from Juju side? Tom says "We've effectively pushed the MAC setting down as far as we can (AFAIK) so it seems like there are limited options"

Changed in juju:
status: Incomplete → New
Revision history for this message
Jeffrey Chang (modern911) wrote :

Bug occurrence rate in the SQA lab this week:
 6 / 27 in fkb-master-kubernetes-jammy-baremetal-ovn
 2 / 5 in fkb-master-kubernetes-jammy-baremetal (since Feb 9)
Not so much in other SKUs.

There is also another related lxd/juju bug, LP#1956981:
 5 / 27 in fcb-master-yoga-jammy
However, we saw 1 occurrence on focal this week.

Changed in juju:
assignee: nobody → Joseph Phillips (manadart)
importance: Undecided → Medium
status: New → Triaged
Revision history for this message
Joseph Phillips (manadart) wrote :

The only option that I can see for Juju is to remove the "match" stanza from the Netplan configuration that we render. It's not strictly necessary anyway.

This should ensure that devices get an IP address, but that doesn't do anything about the hardware address that gets assigned. Juju can't control that - we supply the config and start the container; nothing else.
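
As a sketch, the rendered stanza would then key purely on the interface name (assumed shape, not the actual patch output):

```
# /etc/netplan/99-juju.yaml with the "match" block dropped (illustrative)
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.246.169.7/22
```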

Stand by for the patch.

Changed in juju:
status: Triaged → In Progress
importance: Medium → High
milestone: none → 3.1.1
Revision history for this message
Marian Gasparovic (marosg) wrote :

I have got a hint from Lukas; this could be related:

https://github.com/canonical/netplan/pull/278

Revision history for this message
Joseph Phillips (manadart) wrote :

I'm not sure it is related. The matching works fine if the device has the MAC that we configured and generated the Netplan config for.

In any case, here's the patch for Juju:
https://github.com/juju/juju/pull/15194

Changed in juju:
status: In Progress → Fix Committed
Revision history for this message
Joseph Phillips (manadart) wrote :

Reopened. Reverting the prior patch, which breaks KVM.

Changed in juju:
status: Fix Committed → Triaged
Revision history for this message
Andy Wu (qch2012) wrote :

We have hit this issue multiple times in recent yoga/jammy deployments for PS6; every redeployment has 2-3 units pending to start because the LXD unit's eth0 interface has the wrong MAC address and therefore cannot get an IP.

Subscribing this bug to field-high.

Revision history for this message
Thomas Parrott (tomparrott) wrote :

I've been looking into this today and have come across some relevant links which I suspect explain the issue.

I suspect it is the change to systemd-networkd that adds a default link policy of:

`MACAddressPolicy=persistent`

This would apply to newly created veth interfaces.
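
On a stock jammy system this default ships with systemd itself and can be confirmed directly (illustrative check):

```
# systemd's built-in default .link file carries the policy that rewrites
# the MAC of freshly created interfaces to a persistent, generated one.
grep MACAddressPolicy /usr/lib/systemd/network/99-default.link
# MACAddressPolicy=persistent
```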

See:

https://bugzilla.suse.com/show_bug.cgi?id=1136600
https://github.com/systemd/systemd/issues/25555
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/15#note_162509
https://github.com/moby/libnetwork/pull/2380

Specifically the last one was key:

> We set the address before udev gets to the networking rules, so udev
> sees /sys/devices/virtual/net/docker0/addr_assign_type = 3
> (NET_ADDR_SET). This means there's no need to assign a different
> address and everything is fine.

This got me thinking that if we can set the MAC address at the same time the interface is created (in one operation) then this might prevent systemd-udevd from thinking it needs to generate and apply a persistent MAC address.
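
A corresponding check inside a container (illustrative; the type values are from the kernel's netdevice ABI):

```
# addr_assign_type records how the MAC came to be:
# 0 = permanent (hardware), 1 = randomly generated by the kernel,
# 2 = stolen from another device, 3 = set by userspace (NET_ADDR_SET).
# Per the libnetwork discussion quoted above, udev leaves type 3 alone.
cat /sys/class/net/eth0/addr_assign_type
```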

I've figured out how to update LXD's use of the `ip link add` command to apply the MAC, MTU, and other settings directly in a single execution, rather than calling `ip link add` first to create the veth pair and then calling `ip link set` afterwards.

Hopefully this should be sufficient to ensure that systemd-udevd always sees the veth interfaces created by LXD as having a manually set MAC address and will leave them alone.

https://github.com/lxc/lxd/pull/11399
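
Conceptually, the change amounts to something like this (a sketch of the single-command form, assuming an iproute2 build that accepts link attributes after `peer`; not LXD's literal invocation):

```
# Create the pair with the MAC (and MTU) applied at creation time, so the
# kernel records addr_assign_type = 3 for that end and udev never sees a
# kernel-generated address it feels entitled to rewrite.
ip link add vethHOST mtu 1500 type veth \
    peer name vethCTN address 00:16:3e:dd:d2:fd mtu 1500
```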

Revision history for this message
Jeffrey Chang (modern911) wrote :

Seems the fix is available on the lxd latest/edge channel, which is used by most SQA SKUs.
  latest/edge: git-ab0c905 2023-02-28 (24556) 173MB -

We will keep an eye on recent runs and see.
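
For anyone checking which LXD they are running (illustrative snap commands, assuming LXD is installed as a snap):

```
# Show the channel and revision the lxd snap is tracking.
snap list lxd
# Switch a test machine onto the edge channel carrying the fix.
snap refresh lxd --channel=latest/edge
```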

Revision history for this message
Bas de Bruijne (basdbruijne) wrote :

We did see it again during the 2.9.42 release testing, in this testrun: https://solutions.qa.canonical.com/v2/testruns/42ffea1c-5320-4e91-a43e-4cb5584f6c1c

Again, the machine got the wrong MAC and therefore no IPv4 address:
---------------------------
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 86:51:e3:d7:7e:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::8451:e3ff:fed7:7ef5/64 scope link
       valid_lft forever preferred_lft forever
22: eth1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:5f:6e:18 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.246.64.237/21 brd 10.246.71.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe5f:6e18/64 scope link
       valid_lft forever preferred_lft forever
24: eth2@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:62:e3:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.244.8.135/24 brd 10.244.8.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe62:e3cf/64 scope link
       valid_lft forever preferred_lft forever
---------------------------

Here, we are using lxd version 5.0/stable. Crashdumps can be found here: https://oil-jenkins.canonical.com/artifacts/42ffea1c-5320-4e91-a43e-4cb5584f6c1c/index.html

Changed in juju:
status: Triaged → Fix Committed
status: Fix Committed → Triaged
Changed in juju:
milestone: 3.1.1 → 3.1.2
Revision history for this message
Thomas Parrott (tomparrott) wrote :

At the moment the fix is only in `latest/*` channels and not in `5.0/*` channels.
It would be interesting to know if the `latest/*` channels fix it.

Changed in juju:
milestone: 3.1.2 → 3.1.3
Changed in juju:
milestone: 3.1.3 → 3.1.4
Changed in juju:
milestone: 3.1.4 → 3.1.5
Revision history for this message
Joseph Phillips (manadart) wrote :

I've set this to incomplete for Juju and removed the milestones.

Happy to have it reopened if the declared fixes do not work.

Changed in juju:
milestone: 3.1.5 → none
status: Triaged → Incomplete
assignee: Joseph Phillips (manadart) → nobody