filtering juju status by machine id gives inconsistent results

Bug #1856533 reported by Andrea Ieri
Affects:      Canonical Juju
Status:       Triaged
Importance:   Low
Assigned to:  Unassigned

Bug Description

I often find myself running `juju status <id>` to obtain a view of every unit running on a specific machine. This works great, except that it sometimes returns units from unrelated machines. I have not been able to find a pattern to this: sometimes it's `juju status 0` that returns too much, sometimes it's `juju status 8` or some other id.

This can easily be reproduced by deploying basically any model and iterating through all machine ids: several will return too many entries.
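
For anyone who wants to script that check, here is a minimal sketch (assuming jq is available; it leans on the machine-readable output instead of the human-oriented table):

$ # Flag every machine id whose filtered status mentions more than one machine
$ for m in $(juju machines --format=json | jq -r '.machines | keys[]'); do
      n=$(juju status "$m" --format=json | jq '.machines | length')
      [ "$n" -gt 1 ] && echo "filter $m returned $n machines"
  done

With correct filtering this should print nothing; on the deployment below it should flag the same 7 machines as the stats further down.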

For example, in a bionic-queens openstack-on-lxd deployment:

$ juju status 4
Model            Controller  Cloud/Region         Version  SLA          Timestamp
aieri-openstack  lxd         localhost/localhost  2.7.0    unsupported  08:35:04Z

App                  Version  Status  Scale  Charm                Store       Rev  OS      Notes
ceph-osd             12.2.12  active      1  ceph-osd             jujucharms  294  ubuntu
openstack-dashboard  13.0.2   active      1  openstack-dashboard  jujucharms  297  ubuntu

Unit                    Workload  Agent  Machine  Public address  Ports           Message
ceph-osd/0*             active    idle   4        10.0.8.64                       Unit is ready (1 OSD)
openstack-dashboard/0*  active    idle   21       10.0.8.53       80/tcp,443/tcp  Unit is ready

Machine  State    DNS        Inst id         Series  AZ  Message
4        started  10.0.8.64  juju-c3090d-4   bionic      Running
21       started  10.0.8.53  juju-c3090d-21  bionic      Running

Here are some stats (the pipeline counts the rows of the Machine section that each filter returns):

$ for i in `seq 0 23`; do juju status $i | grep -A1000 '^Machine' | grep -vE 'Machine|^$' | wc -l; done | sort | uniq -c
     17 1
      4 2
      2 3
      1 6

Basically, in my deployment, filtering is correct for 17 out of 24 machines, but fails for the remaining 7, which return 2, 3, or even 6 machine entries.
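
The machine-readable output gives the same information without grepping the table; for any offending id, listing the machine keys shows exactly which machines leak in (again assuming jq):

$ juju status 8 --format=json | jq -r '.machines | keys[]'

Only `8` should come back, but per the output below, six ids do.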

Here is the worst one:

$ juju status 8
Model            Controller  Cloud/Region         Version  SLA          Timestamp
aieri-openstack  lxd         localhost/localhost  2.7.0    unsupported  08:48:18Z

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-radosgw           12.2.12  active      1  ceph-radosgw           jujucharms  283  ubuntu
cinder                 12.0.7   active      1  cinder                 jujucharms  297  ubuntu
cinder-ceph            12.0.7   active      0  cinder-ceph            jujucharms  251  ubuntu
gnocchi                4.2.5    active      1  gnocchi                jujucharms   30  ubuntu
heat                   10.0.2   active      1  heat                   jujucharms  271  ubuntu
nova-cloud-controller  17.0.11  active      1  nova-cloud-controller  jujucharms  339  ubuntu
openstack-dashboard    13.0.2   active      1  openstack-dashboard    jujucharms  297  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports              Message
ceph-radosgw/0*           active    idle   7        10.0.8.37       80/tcp             Unit is ready
cinder/0*                 active    idle   8        10.0.8.70       8776/tcp           Unit is ready
  cinder-ceph/0*          active    idle            10.0.8.70                          Unit is ready
gnocchi/0*                active    idle   12       10.0.8.42       8041/tcp           Unit is ready
heat/0*                   active    idle   13       10.0.8.18       8000/tcp,8004/tcp  Unit is ready
nova-cloud-controller/0*  active    idle   19       10.0.8.81       8774/tcp,8778/tcp  Unit is ready
openstack-dashboard/0*    active    idle   21       10.0.8.53       80/tcp,443/tcp     Unit is ready

Machine  State    DNS        Inst id         Series  AZ  Message
7        started  10.0.8.37  juju-c3090d-7   bionic      Running
8        started  10.0.8.70  juju-c3090d-8   bionic      Running
12       started  10.0.8.42  juju-c3090d-12  bionic      Running
13       started  10.0.8.18  juju-c3090d-13  bionic      Running
19       started  10.0.8.81  juju-c3090d-19  bionic      Running
21       started  10.0.8.53  juju-c3090d-21  bionic      Running

Andrea Ieri (aieri) wrote:

...sorry for the mangled output. Here it is again in a pastebin: https://pastebin.canonical.com/p/PWfnrkxjV9/

Junien Fridrick (axino) wrote:

Related (if not duplicate): bug 1809411

Changed in juju:
status: New → Triaged
importance: Undecided → Medium

Canonical Juju QA Bot (juju-qa-bot) wrote:

This bug has not been updated in 2 years, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: Medium → Low
tags: added: expirebugs-bot

Haw Loeung (hloeung) wrote:

I think this is LP: #1988709. `juju status 4` returns machine `21` because it opens `443/tcp`, which contains a "4"; `juju status 8` matches because of `80/tcp`, `8776/tcp`, `8041/tcp`, etc.
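
If that is the cause, the stray entries should line up exactly with units whose opened ports contain the filtered digit. A quick sketch to check (assuming jq, and assuming unit port lists sit under `open-ports` in the JSON status as in Juju 2.x; subordinate units are ignored for simplicity):

$ # List principal units whose opened ports contain "4"
$ juju status --format=json | jq -r '
    .applications[].units // {} | to_entries[]
    | select(any(.value["open-ports"][]?; contains("4")))
    | "\(.key) on machine \(.value.machine): \(.value["open-ports"] | join(","))"'

Going by the tables above, this should single out openstack-dashboard/0 on machine 21 (443/tcp), which is exactly the stray entry in the `juju status 4` output.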
