High CPU usage on 2.7 edge
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | High | Ian Booth |
Bug Description
I have deployed a 2.7 edge Juju controller to MicroK8s, and I'm getting 300% CPU usage from this process:
/var/
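For reference, a minimal sketch of how a setup like this can be stood up and inspected on MicroK8s; the controller name (`micro`), namespace, pod and container names below are assumptions for illustration, not details from this report:

```bash
# Install MicroK8s and a 2.7 edge Juju client (channel assumed), then bootstrap.
sudo snap install microk8s --classic
sudo snap install juju --classic --channel=2.7/edge
juju bootstrap microk8s micro        # controller pods land in namespace controller-micro

# Check CPU usage of the controller pod (needs the metrics-server addon).
# Older MicroK8s snaps use the dotted form, e.g. microk8s.kubectl.
microk8s enable metrics-server
microk8s kubectl top pod -n controller-micro

# Fetch the container logs for the controller pod (container name assumed).
microk8s kubectl logs -n controller-micro controller-0 -c api-server
```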
I don't yet have any models/applications deployed to it, so it's just the controller. The controller works properly, apart from the high CPU usage. When I look at the container logs, I don't see anything that looks relevant to the high CPU usage:
2019-10-07 20:27:12 INFO juju.cmd supercommand.go:79 running jujud [2.7-beta1 gc go1.12.10]
2019-10-07 20:27:12 INFO juju.agent identity.go:28 removing system identity file
2019-10-07 20:27:12 WARNING juju.mongo open.go:160 mongodb connection failed, will retry: dial tcp 127.0.0.1:37017: connect: connection refused
2019-10-07 20:27:12 WARNING juju.mongo open.go:160 mongodb connection failed, will retry: dial tcp 127.0.0.1:37017: connect: connection refused
2019-10-07 20:27:13 WARNING juju.mongo open.go:160 mongodb connection failed, will retry: dial tcp 127.0.0.1:37017: connect: connection refused
2019-10-07 20:27:14 INFO juju.replicaset replicaset.go:58 Initiating replicaset with config: {
Name: juju,
Version: 1,
Protocol Version: 1,
Members: {
{1 "localhost:37017" juju-machine-id:0 voting},
},
}
2019-10-07 20:27:14 INFO juju.worker.
2019-10-07 20:27:14 INFO juju.worker.
2019-10-07 20:27:14 INFO juju.cmd.jujud bootstrap.go:479 started mongo
2019-10-07 20:27:16 INFO juju.state open.go:144 using client-side transactions
2019-10-07 20:27:17 INFO juju.state logs.go:91 controller settings not found, early stage initialization assumed
2019-10-07 20:27:17 INFO juju.state state.go:467 starting standard state workers
2019-10-07 20:27:17 INFO juju.state state.go:474 creating cloud image metadata storage
2019-10-07 20:27:17 INFO juju.state state.go:480 started state for model-b95eca3b-
2019-10-07 20:27:17 INFO juju.state initialize.go:184 initializing controller model b95eca3b-
2019-10-07 20:27:18 INFO juju.state logs.go:169 creating logs collection for b95eca3b-
2019-10-07 20:27:18 INFO juju.agent.
2019-10-07 20:27:18 WARNING juju.cmd.jujud bootstrap.go:357 cannot set up Juju GUI: cannot fetch GUI info: GUI metadata not found
2019-10-07 20:27:18 INFO cmd supercommand.go:525 command finished
2019-10-07 20:27:18 INFO juju.cmd supercommand.go:79 running jujud [2.7-beta1 gc go1.12.10]
2019-10-07 20:27:19 INFO juju.worker.
2019-10-07 20:27:19 INFO juju.state open.go:144 using client-side transactions
2019-10-07 20:27:19 INFO juju.state state.go:467 starting standard state workers
2019-10-07 20:27:19 INFO juju.state state.go:474 creating cloud image metadata storage
2019-10-07 20:27:19 INFO juju.state state.go:480 started state for model-b95eca3b-
2019-10-07 20:27:19 INFO juju.cmd.jujud machine.go:1022 juju database opened
2019-10-07 20:27:19 INFO juju.worker.
2019-10-07 20:27:19 INFO juju.worker.
2019-10-07 20:27:19 INFO juju.api apiclient.go:624 connection established to "wss://
2019-10-07 20:27:20 INFO juju.state open.go:144 using client-side transactions
2019-10-07 20:27:20 INFO juju.worker.
2019-10-07 20:27:20 INFO juju.apiserver.
2019-10-07 20:27:20 INFO juju.apiserver.
2019-10-07 20:27:20 INFO juju.worker.
2019-10-07 20:27:20 INFO juju.worker.
2019-10-07 20:27:20 INFO juju.api apiclient.go:624 connection established to "wss://
2019-10-07 20:27:20 INFO juju.worker.
2019-10-07 20:27:20 INFO juju.worker.logger logger.go:118 logger worker started
2019-10-07 20:27:20 INFO juju.agent identity.go:28 removing system identity file
b95eca3b-
2019-10-07 20:27:26 INFO juju.worker.logger logger.go:118 logger worker started
Changed in juju:
status: In Progress → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released
When I deploy a number of charms to MicroK8s running on an AWS instance, I'm also seeing about 1.5 GB of memory used. I think this is responsible for some CI errors caused by a Kubeflow deploy timing out: I later SSHed into the AWS instance and checked that all of the pods were running properly, so it looks like the deploy was just very slow.
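A rough sketch of the checks described above, assuming the same MicroK8s snap and a controller named `micro`; the instance address is a placeholder and the namespace is an assumption:

```bash
# SSH into the AWS instance running MicroK8s (address is a placeholder).
ssh ubuntu@<aws-instance-ip>

# Confirm all pods (including the Kubeflow ones) eventually reach Running.
microk8s kubectl get pods --all-namespaces

# Memory used by the controller pod (requires the metrics-server addon),
# plus the top memory consumers on the host itself.
microk8s kubectl top pod -n controller-micro
top -b -n 1 -o %MEM | head -n 15
```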