Comment 5 for bug 1956981

Bas de Bruijne (basdbruijne) wrote:

In testrun https://solutions.qa.canonical.com/testruns/testRun/fd79805c-8f0c-4965-af14-e01017439fe9 I looked around on the live env. Here, two of the machines (0/lxd/1 and 0/lxd/3) are stuck in the pending state:

```
Machine   State    Address         Inst id               Series  AZ     Message
0         started  10.246.167.190  solqa-lab1-server-07  jammy   zone1  Deployed
0/lxd/0   started  10.246.167.149  juju-6ba804-0-lxd-0   jammy   zone1  Container started
0/lxd/1   pending                  juju-6ba804-0-lxd-1   jammy   zone1  Container started
0/lxd/2   started  10.246.164.253  juju-6ba804-0-lxd-2   jammy   zone1  Container started
0/lxd/3   pending                  juju-6ba804-0-lxd-3   jammy   zone1  Container started
0/lxd/4   started  10.246.166.148  juju-6ba804-0-lxd-4   jammy   zone1  Container started
0/lxd/5   started  10.246.165.82   juju-6ba804-0-lxd-5   jammy   zone1  Container started
0/lxd/6   started  10.246.167.96   juju-6ba804-0-lxd-6   jammy   zone1  Container started
0/lxd/7   started  10.246.167.159  juju-6ba804-0-lxd-7   jammy   zone1  Container started
0/lxd/8   started  10.246.166.215  juju-6ba804-0-lxd-8   jammy   zone1  Container started
0/lxd/9   started  10.246.165.72   juju-6ba804-0-lxd-9   jammy   zone1  Container started
0/lxd/10  started  10.246.164.203  juju-6ba804-0-lxd-10  jammy   zone1  Container started
```
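For reference, this is roughly how I pull out just the stuck containers from the status output (a sketch, assuming `jq` is installed on the client and the model above is the current model; the JSON field names match what recent Juju releases emit, but may differ between versions):

```
# List the containers on machine 0 whose agent is still "pending"
juju status --format=json | jq -r '.machines["0"].containers | to_entries[] | select(.value["juju-status"].current == "pending") | .key'
# should print 0/lxd/1 and 0/lxd/3
```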

But logging on to the host machine itself shows no problem with the containers:
```
ubuntu@solqa-lab1-server-07:~$ sudo lxc list
To start your first container, try: lxc launch ubuntu:22.04
Or for a virtual machine: lxc launch ubuntu:22.04 --vm

+----------------------+---------+-----------------------+------+-----------+-----------+
| NAME                 | STATE   | IPV4                  | IPV6 | TYPE      | SNAPSHOTS |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-0  | RUNNING | 10.246.173.8 (eth1)   |      | CONTAINER | 0         |
|                      |         | 10.246.172.111 (eth1) |      |           |           |
|                      |         | 10.246.169.47 (eth0)  |      |           |           |
|                      |         | 10.246.168.111 (eth0) |      |           |           |
|                      |         | 10.246.167.149 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-1  | RUNNING | 10.246.176.28 (eth1)  |      | CONTAINER | 0         |
|                      |         | 10.246.172.62 (eth0)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-2  | RUNNING | 10.246.172.251 (eth1) |      | CONTAINER | 0         |
|                      |         | 10.246.168.255 (eth0) |      |           |           |
|                      |         | 10.246.164.253 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-3  | RUNNING | 10.246.169.48 (eth0)  |      | CONTAINER | 0         |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-4  | RUNNING | 10.246.173.9 (eth1)   |      | CONTAINER | 0         |
|                      |         | 10.246.172.117 (eth1) |      |           |           |
|                      |         | 10.246.169.49 (eth0)  |      |           |           |
|                      |         | 10.246.168.117 (eth0) |      |           |           |
|                      |         | 10.246.166.148 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-5  | RUNNING | 10.246.169.0 (eth0)   |      | CONTAINER | 0         |
|                      |         | 10.246.165.82 (eth1)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-6  | RUNNING | 10.246.172.250 (eth1) |      | CONTAINER | 0         |
|                      |         | 10.246.168.254 (eth0) |      |           |           |
|                      |         | 10.246.167.96 (eth2)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-7  | RUNNING | 10.246.172.55 (eth1)  |      | CONTAINER | 0         |
|                      |         | 10.246.172.123 (eth1) |      |           |           |
|                      |         | 10.246.169.176 (eth0) |      |           |           |
|                      |         | 10.246.168.123 (eth0) |      |           |           |
|                      |         | 10.246.167.159 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-8  | RUNNING | 10.246.172.57 (eth1)  |      | CONTAINER | 0         |
|                      |         | 10.246.172.132 (eth1) |      |           |           |
|                      |         | 10.246.169.178 (eth0) |      |           |           |
|                      |         | 10.246.168.132 (eth0) |      |           |           |
|                      |         | 10.246.166.215 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-9  | RUNNING | 10.246.169.46 (eth0)  |      | CONTAINER | 0         |
|                      |         | 10.246.165.72 (eth1)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-10 | RUNNING | 10.246.172.252 (eth1) |      | CONTAINER | 0         |
|                      |         | 10.246.172.124 (eth1) |      |           |           |
|                      |         | 10.246.169.1 (eth0)   |      |           |           |
|                      |         | 10.246.168.124 (eth0) |      |           |           |
|                      |         | 10.246.164.203 (eth2) |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
```

The spaces in this deployment are:
```
ubuntu@lab1-silo2-cpe-fd79805c-8f0c-4965-af14-e01017439fe9:/root$ juju spaces
Name                Space ID  Subnets
alpha               0
oam-space           1         10.246.164.0/22
public-space        2         10.246.172.0/22
internal-space      3         10.246.168.0/22
external-space      4         10.246.180.0/22
ceph-replica-space  5         10.246.176.0/22
undefined           6         10.36.87.0/24
```
So neither of these containers got an address in the oam-space (10.246.164.0/22).
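
To see what Juju itself thinks the stuck containers' addresses are (as opposed to what LXD reports), something like the following should help; `juju show-machine` is standard Juju CLI, though the exact fields in its output vary a bit between versions:

```
# Compare Juju's recorded view of a stuck container with a healthy one
juju show-machine 0/lxd/1 --format yaml
juju show-machine 0/lxd/2 --format yaml
```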

I tried adding another unit of the charm that is deployed on 0/lxd/1:
```
ubuntu@lab1-silo2-cpe-fd79805c-8f0c-4965-af14-e01017439fe9:/root$ juju add-unit ceph-mon --to lxd:0
```

The new unit that came up did get an address in the oam-space (even though the two containers should be configured the same):
```
ubuntu@solqa-lab1-server-07:~$ sudo lxc list
+----------------------+---------+-----------------------+------+-----------+-----------+
| NAME                 | STATE   | IPV4                  | IPV6 | TYPE      | SNAPSHOTS |
+----------------------+---------+-----------------------+------+-----------+-----------+
...
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-1  | RUNNING | 10.246.176.28 (eth1)  |      | CONTAINER | 0         |
|                      |         | 10.246.172.62 (eth0)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
...
+----------------------+---------+-----------------------+------+-----------+-----------+
| juju-6ba804-0-lxd-11 | RUNNING | 10.246.176.66 (eth1)  |      | CONTAINER | 0         |
|                      |         | 10.246.173.183 (eth0) |      |           |           |
|                      |         | 10.246.166.53 (eth2)  |      |           |           |
+----------------------+---------+-----------------------+------+-----------+-----------+
```
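
Since the stuck container and the freshly added one are supposed to end up with the same NIC layout, comparing their expanded LXD configs might show where they diverge. A sketch run on the host (`lxc config show --expanded` is standard LXD; the container names are taken from the listings above):

```
# On solqa-lab1-server-07: diff the expanded config (profiles + devices)
# of the stuck container against the new, working one
sudo lxc config show --expanded juju-6ba804-0-lxd-1 > /tmp/lxd-1.yaml
sudo lxc config show --expanded juju-6ba804-0-lxd-11 > /tmp/lxd-11.yaml
diff -u /tmp/lxd-1.yaml /tmp/lxd-11.yaml
```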

Nothing in the logs stands out to me as an explanation for why this is happening.
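
For anyone who wants to dig through the same logs, this is roughly where I looked; the log path is the Juju default for machine agents and the grep patterns are just a guess at what to filter on:

```
# On the host (machine 0): the machine agent provisions the containers
sudo grep -wE "juju-6ba804-0-lxd-(1|3)" /var/log/juju/machine-0.log | grep -iE "space|address|network|provision"

# Controller side, via debug-log
juju debug-log --replay --include machine-0 | grep -E "0/lxd/(1|3)\b"
```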

The crashdumps should appear here once the testrun has finished:
https://oil-jenkins.canonical.com/artifacts/fd79805c-8f0c-4965-af14-e01017439fe9/index.html