Steps to reproduce:
1. Create an Ubuntu cluster: 3 controller/mongo nodes + 2 compute nodes.
2. Deploy it.
3. After the deployment has finished, add one more controller node to the cluster.
4. Start deployment again (a possible CLI sequence is sketched below).
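For steps 3-4, assuming the standard python-fuelclient commands (node and environment ids here are examples, and the exact flag spelling may vary between client versions), this could look like:
# Assign the controller role to the newly discovered node:
fuel node set --node 15 --role controller --env 1
# Deploy the pending changes:
fuel deploy-changes --env 1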
Expected results:
The cluster with the new controller is successfully deployed.
Actual results:
Deployment fails on the cluster-haproxy task:
################
priority: 1500
type: puppet
uids:
- '15'
parameters:
  puppet_modules: "/etc/puppet/modules"
  puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp"
  timeout: 3600
  cwd: "/"
################
Puppet log from the failed node: http://pastebin.com/cYra4FQD
Digging a little deeper, I found that the new node has built its own cluster (e.g. the haproxy resource is not present in its CIB):
################
root@node-15:~# crm_mon --one-shot
Last updated: Thu Sep 10 21:37:00 2015
Last change: Thu Sep 10 13:49:03 2015
Stack: corosync
Current DC: node-15.test.domain.local (15) - partition WITHOUT quorum
Version: 1.1.12-561c4cf
1 Nodes configured
7 Resources configured
Online: [ node-15.test.domain.local ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-15.test.domain.local ]
vip__management (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__public (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__zbx_vip_mgmt (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-15.test.domain.local ]
#################
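To double-check this on the new node, one can query its CIB and corosync membership directly; a minimal sketch, assuming p_haproxy is the resource name used for haproxy (an assumption, not taken from the output above):
# Look for the haproxy primitive in the CIB; per the symptom above,
# on node-15 this finds nothing:
cibadmin --query --xpath "//primitive[@id='p_haproxy']"
# Show the corosync membership this node knows about; a single entry
# here matches the "partition WITHOUT quorum" state above:
corosync-cmapctl | grep nodelist.node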
I inspected the /etc/corosync/corosync.conf file on an old controller and did not find the new node in the cluster's node list. I also took a look at the astute.yaml file on the old controller nodes and did not find the 'cluster' task in their task list - http://pastebin.com/ATuduYBB - which matters because, in my opinion, this task is in charge of configuring the corosync.conf file. For reference, here is the 'cluster' task definition as it appears for the new node (uid '15'):
################
- priority: 1100
  type: puppet
  uids:
  - '15'
  parameters:
    puppet_modules: "/etc/puppet/modules"
    puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp"
    timeout: 3600
    cwd: "/"
###############
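For comparison, once the 'cluster' task is (re-)applied everywhere, the nodelist section of /etc/corosync/corosync.conf on every controller should contain all four controllers. A sketch with hypothetical management addresses and node ids (only nodeid 15 is taken from the crm_mon output above):
nodelist {
  node {
    ring0_addr: 192.168.0.3   # an old controller (address hypothetical)
    nodeid: 1
  }
  # ... the other two old controllers go here ...
  node {
    ring0_addr: 192.168.0.7   # node-15, the newly added controller
    nodeid: 15
  }
}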
So it seems that we do not re-apply this task on the already-deployed controller nodes, and as a result the existing cluster members cannot successfully communicate with the new one. I don't know how we calculate the set of tasks for controller nodes that are already in the 'ready' state, but it seems we should include the 'cluster' task in this case.
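As a possible manual workaround (a sketch only; the paths are taken from the task definition above), the 'cluster' task can be re-run by hand on each pre-existing controller so that corosync.conf picks up the new member:
# Re-apply the cluster task manifest on each old controller:
puppet apply --modulepath=/etc/puppet/modules \
  /etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp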