Multiple cloudkitty-processor daemons produce an incorrect total cost

Bug #1675635 reported by zhangguoqing
Affects: cloudkitty
Status: New
Importance: Undecided
Assigned to: zhangguoqing

Bug Description

Our environment has three controller nodes, and each controller node runs a cloudkitty-api daemon and a cloudkitty-processor daemon. All of them are working well, but we found that the total cost is incorrect: it is higher than what was actually consumed, because the three cloudkitty-processor daemons may perform the same charging repeatedly. We confirmed this by stopping two of the cloudkitty-processor daemons.

So I think we should make multiple cloudkitty-processor daemons coordinate with each other.
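
A minimal sketch of what such coordination could look like with tooz (this is not CloudKitty's actual orchestrator code; the backend URL, member id and lock name below are only illustrative):

from tooz import coordination


def process_tenant():
    """Placeholder for the actual rating work on one tenant/period."""


# Each processor joins the same tooz backend with a unique member id.
coordinator = coordination.get_coordinator(
    'file:///var/lib/cloudkitty/locks',  # any tooz backend URL
    b'processor-controller-1')           # unique id for this daemon
coordinator.start()

# Only the daemon that acquires the per-tenant lock rates that tenant,
# so two processors can never charge the same period twice.
lock = coordinator.get_lock(b'rating-tenant-42')
if lock.acquire(blocking=False):
    try:
        process_tenant()
    finally:
        lock.release()
# else: another cloudkitty-processor already owns this work item

coordinator.stop()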

Changed in cloudkitty:
assignee: nobody → zhangguoqing (474751729-o)
Revision history for this message
zhangguoqing (474751729-o) wrote :

If I run two cloudkitty-processor daemons on the same node, the second daemon reports the following warning when it starts:

/opt/stack/cloudkitty/cloudkitty/messaging.py:67: FutureWarning: The access_policy argument is changing its default value to <class 'oslo_messaging.rpc.dispatcher.DefaultRPCAccessPolicy'> in version '?', please update the code to explicitly set None as the value: access_policy defaults to LegacyRPCAccessPolicy which exposes private methods. Explicitly set access_policy to DefaultRPCAccessPolicy or ExplicitRPCAccessPolicy.
  endpoints, executor='eventlet')

Revision history for this message
zhangguoqing (474751729-o) wrote :

On a single node, I set coordination_url = file:///var/lib/cloudkitty/locks in the [orchestrator] configuration section and ran two cloudkitty-processor daemons: the total rating is correct and the cloudkitty-processor daemons are highly available.
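
That is, the [orchestrator] section of cloudkitty.conf looks roughly like this:

[orchestrator]
coordination_url = file:///var/lib/cloudkitty/locks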

But across two nodes, with coordination_url = redis://172.16.40.6:6379 and one cloudkitty-processor daemon running on each node, the total rating is incorrect. Both running daemons repeatedly report the same error:
2017-03-29 09:16:52.559 6474 WARNING tooz.drivers.redis [-] Unable to heartbeat lock '<tooz.drivers.redis.RedisLock object at 0x7f57daea45d0>'
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis Traceback (most recent call last):
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/drivers/redis.py", line 509, in heartbeat
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis lock.heartbeat()
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/drivers/redis.py", line 108, in heartbeat
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis self._lock.extend(self._lock.timeout)
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis self.gen.throw(type, value, traceback)
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/drivers/redis.py", line 55, in _translate_failures
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis cause=e)
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/utils.py", line 225, in raise_with_cause
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis excutils.raise_with_cause(exc_cls, message, *args, **kwargs)
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 143, in raise_with_cause
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis six.raise_from(exc_cls(message, *args, **kwargs), kwargs.get('cause'))
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/six.py", line 718, in raise_from
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis raise value
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis ToozError: Cannot extend an unlocked lock
2017-03-29 09:16:52.559 6474 ERROR tooz.drivers.redis
2017-03-29 09:16:53.574 6474 WARNING tooz.drivers.redis [-] Unable to heartbeat lock '<tooz.drivers.redis.RedisLock object at 0x7f57daea45d0>'
2017-03-29 09:16:53.574 6474 ERROR tooz.drivers.redis Traceback (most recent call last):
2017-03-29 09:16:53.574 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/drivers/redis.py", line 509, in heartbeat
2017-03-29 09:16:53.574 6474 ERROR tooz.drivers.redis lock.heartbeat()
2017-03-29 09:16:53.574 6474 ERROR tooz.drivers.redis File "/usr/local/lib/python2.7/dist-packages/tooz/drivers/redis.py", line 108, in heartbeat
2017-03-29 09:16:53.574 6474 ERROR tooz.drivers.redi...

Revision history for this message
Maxime Cottret (aolwas) wrote :

Hi Guoqing,

Have you tried another backend, like SQL, to see whether the problem comes from our code or from the redis backend?
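
For example, tooz also ships SQL drivers, so something along these lines could be tested (host, credentials and database name below are placeholders):

[orchestrator]
coordination_url = mysql://cloudkitty:CK_DBPASS@172.16.40.6:3306/cloudkitty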

Revision history for this message
Christian Zunker (christian-zunker) wrote :

Hi,

I use zookeeper as the coordination backend and get no error messages:
coordination_url = zookeeper://172.20.231.35:2181,172.20.66.112:2181,172.20.55.188:2181

cloudkitty version: 7.0.0
kazoo version: 2.4.0
tooz version: 1.58.0

We are running three cloudkitty-processors across three servers.
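
In cloudkitty.conf that is just the same [orchestrator] option as above:

[orchestrator]
coordination_url = zookeeper://172.20.231.35:2181,172.20.66.112:2181,172.20.55.188:2181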
