rpc pods in CrashLoopBackOff after upgrade from Mitaka to Newton
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
fuel-ccp | Invalid | Undecided | Unassigned |
Bug Description
Steps to reproduce:
1. Deploy ccp with configs from https:/
2. Try to run shaker (http://
3. Once the Heat stack has been created, change versions.yaml to versions.

All rabbit pods except one have a lot of restarts:
root@node4:~# kubectl -n ccp get pods -o wide | grep -E '(rpc|notific)'
notifications-
notifications-
notifications-
rpc-1612787118-
rpc-1612787118-
rpc-1612787118-
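A quick way to see which of these pods are restarting is to strip the listing down to the pod name and restart-count columns. A sketch, assuming the default `kubectl get pods` column layout (NAME is field 1, RESTARTS is field 4):

```shell
# List rpc/notifications pods with their restart counts, highest first.
kubectl -n ccp get pods | grep -E '(rpc|notific)' | awk '{print $1, $4}' | sort -k2 -rn
```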
all failing with the same error:
2017-04-13 15:32:09.410 - __main__ - DEBUG - Executing cmd:
/opt/ccp/
Traceback (most recent call last):
File "/opt/ccp_
main()
File "/opt/ccp_
do_
File "/opt/ccp_
run_
File "/opt/ccp_
run_
File "/opt/ccp_
raise ProcessExceptio
__main_
Output of kubectl -n ccp describe pod rpc-1612787118-
31m 31m 1 {kubelet node6} spec.containers
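For a pod stuck in CrashLoopBackOff, the previous container's logs usually hold the actual crash reason alongside the kubelet events. A sketch (the full pod name below is hypothetical; substitute a real one from `kubectl get pods`):

```shell
POD=rpc-1612787118-abcde                 # hypothetical full pod name
kubectl -n ccp describe pod "$POD"       # kubelet events: backoff, restart reason
kubectl -n ccp logs --previous "$POD"    # stdout/stderr of the last crashed container
```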
This is a network high-load issue.