ceilometer.hardware.discovery polls nova api without bound
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceilometer | Fix Released | High | jasonamyers | |
| Kilo | Fix Released | Undecided | Unassigned | |
Bug Description
The hardware (snmp) pollster discovery mechanism, in the default config, polls the Nova API for a list of all servers, every single polling cycle.
This is akin to the problem described in the compute-agent resource metadata caching spec: http://
If this is happening on every polling cycle in a default ceilometer install, with the default pipeline, it would be good to make the call have less impact.
Several things to consider:
* If changes-since is used so that subsequent calls have less impact, the first call will still have a large impact in terms of both CPU and memory usage on the nova-api side, and the memory impact _will remain_ because of the way Python does memory allocation.
* At the moment there are three ways to turn off or change the hardware pollsters' behavior:
** remove it from entry points
** set resources explicitly in pipeline.yaml
** disable hardware.* metrics in pipeline.yaml
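As an illustration of the last option, the default pipeline could exclude the SNMP meters so discovery never needs to hit the Nova API. This is a minimal sketch of the relevant `pipeline.yaml` keys, assuming the stock meter source layout; the exclusion syntax (`!hardware.*`) is the standard pipeline meter-matching notation:

```yaml
sources:
    - name: meter_source
      meters:
          - "*"
          - "!hardware.*"   # exclude all hardware pollsters
      sinks:
          - meter_sink
```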
I'm not sure this counts specifically as a bug but I wanted to get it registered somehow, lest I forget.
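The changes-since idea above can be sketched as a small discovery cache: do one full listing, then ask only for servers modified since the previous poll. This is a hypothetical helper, not Ceilometer's actual implementation; `list_servers` stands in for the Nova API call (e.g. `servers.list(search_opts={'changes-since': ...})`):

```python
from datetime import datetime, timezone


class DiscoveryCache:
    """Cache of known servers, refreshed incrementally via changes-since."""

    def __init__(self, list_servers):
        # list_servers(changes_since) -> iterable of server dicts.
        # changes_since is None on the first call (full, expensive listing);
        # afterwards it is the timestamp of the previous poll.
        self._list_servers = list_servers
        self._servers = {}      # server id -> server dict
        self._last_poll = None  # timestamp of the previous poll

    def poll(self):
        """Return the current server list, fetching only deltas after the first call."""
        for server in self._list_servers(self._last_poll):
            if server.get("status") == "DELETED":
                # changes-since also reports deletions, so drop them here.
                self._servers.pop(server["id"], None)
            else:
                self._servers[server["id"]] = server
        self._last_poll = datetime.now(timezone.utc).isoformat()
        return list(self._servers.values())
```

This trades the repeated full listing for one expensive initial call plus cheap incremental ones, which is exactly why the first-call CPU/memory spike mentioned above still matters.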
Changed in ceilometer:
assignee: nobody → jasonamyers (jason-jasonamyers)

Changed in ceilometer:
importance: Undecided → High
status: New → Triaged

Changed in ceilometer:
milestone: none → liberty-2
status: Fix Committed → Fix Released

Changed in ceilometer:
milestone: liberty-2 → 5.0.0
Fix proposed to branch: master
Review: https://review.openstack.org/206665