"ERROR could not determine leader" but `juju status` says there is a leader
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Triaged | Low | Unassigned |
Bug Description
(mojo-stg-
2.6.10-trusty-amd64
`juju status` says:
Model Controller Cloud/Region Version SLA Timestamp
stg-ua-event-bus prodstack-is prodstack-
I'm not able to run actions on the leader of an application that is scaled out:
(mojo-stg-
ERROR could not determine leader for "kafka"
In the past, I used to be able to run actions on the leader. My scripts take advantage of this, so I don't have to do `juju status` jq magic to find a unit to run actions on.
What's weird is there seems to be a "leader" in `juju status` output (resorting to jq magic...):
(mojo-stg-
kafka/53
(mojo-stg-
...
"public-address": "10.15.118.104",
"open-ports": [
"9093/tcp"
],
"machine": "101",
"leader": true,
"juju-status": {
"version": "2.6.10",
"since": "15 Nov 2019 21:59:29Z",
"current": "idle"
},
"workload-
"since": "25 Jul 2019 10:48:11Z",
"message": "ready",
"current": "active"
}
}
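For reference, the "jq magic" amounts to selecting the unit whose `leader` flag is true in `juju status --format=json` output. A sketch, using a trimmed sample document (unit names illustrative) in place of live output, with the field layout matching the snippet above:

```shell
# Select the leader unit of "kafka" from juju status JSON.
# status_json is a trimmed stand-in for `juju status --format=json` output.
status_json='{"applications":{"kafka":{"units":{"kafka/52":{},"kafka/53":{"leader":true}}}}}'
leader=$(printf '%s' "$status_json" \
  | jq -r '.applications.kafka.units | to_entries[] | select(.value.leader == true) | .key')
echo "$leader"
```

Against live output, replace the sample document with `juju status --format=json`.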
No obvious errors in the unit logs for this application.
Changed in juju:
status: New → Incomplete
What happens at the back end is a call to ApplicationLeaders() on the *State struct.
It seems kafka is not in the known list of leadership lease holders.
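In other words, the backend effectively consults an application-to-leader map built from the lease store, and an application missing from that map produces the reporter's error. A hypothetical sketch of that check (not Juju's actual Go code; names illustrative):

```shell
# Hypothetical sketch: ApplicationLeaders() yields an application -> leader
# map from the lease store; an application absent from the map is what
# produces "could not determine leader".
declare -A lease_holders=( [postgresql]="postgresql/0" )  # note: no "kafka" entry
app=kafka
if [ -n "${lease_holders[$app]+set}" ]; then
  msg="leader: ${lease_holders[$app]}"
else
  msg="ERROR could not determine leader for \"$app\""
fi
echo "$msg"
```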
Can you confirm if you are running with legacy leases or raft leases?
If raft leases, can you look at the leaseholders collection to see if there's an "application-leadership" lease for kafka in there?
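For anyone following along, that check might look roughly like this in the MongoDB shell on a controller running raft leases (a sketch only; the collection and field names follow the comment above, and the document layout may differ between Juju versions):

```
// In the controller's "juju" database:
db.leaseholders.find({ "namespace": "application-leadership", "lease": "kafka" }).pretty()
```

An empty result here would be consistent with the "could not determine leader" error despite `juju status` showing a leader.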