2015-09-10 21:46:17 |
slava valyavskiy |
bug |
|
|
added bug |
2015-09-10 21:57:21 |
slava valyavskiy |
description |
Steps to reproduce:
1. Create an Ubuntu cluster: 3 controller/mongo + 2 compute
2. Deploy it
3. Add one more controller node to the cluster after the deployment process has finished
4. Start the deployment
Expected results:
The cluster with the new controller is successfully deployed
Actual results:
The deployment fails on the cluster-haproxy task:
################
priority: 1500
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp"
timeout: 3600
cwd: "/"
################
Puppet log from the failed node: http://pastebin.com/cYra4FQD
I dug a little bit deeper and found that the new node has built its own cluster (e.g. the haproxy resource is not present in its CIB):
################
root@node-15:~# crm_mon --one-shot
Last updated: Thu Sep 10 21:37:00 2015
Last change: Thu Sep 10 13:49:03 2015
Stack: corosync
Current DC: node-15.test.domain.local (15) - partition WITHOUT quorum
Version: 1.1.12-561c4cf
1 Nodes configured
7 Resources configured
Online: [ node-15.test.domain.local ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-15.test.domain.local ]
vip__management (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__public (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__zbx_vip_mgmt (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-15.test.domain.local ]
#################
I inspected the /etc/corosync/corosync.conf file on an old controller and did not find the new node in the cluster's node list. I also took a look at the astute.yaml file on the old controller nodes and did not find the 'cluster' task in the task list - http://pastebin.com/ATuduYBB - which matters because, in my opinion, this task is in charge of configuring the corosync.conf file.
################
- priority: 1100
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp"
timeout: 3600
cwd: "/"
###############
So it seems that we do not re-apply this task when new controller nodes are added, and the existing cluster nodes cannot successfully communicate with the new one. I don't know how the set of tasks for already-deployed ('ready') controller nodes is calculated, but it seems that the 'cluster' task should be taken into account in this case. |
Steps to reproduce:
1. Create an Ubuntu cluster: 3 controller/mongo + 2 compute
2. Deploy it
3. Add one more controller node to the cluster after the deployment process has finished
4. Start the deployment
Expected results:
The cluster with the new controller is successfully deployed
Actual results:
The deployment fails on the cluster-haproxy task:
################
priority: 1500
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp"
timeout: 3600
cwd: "/"
################
Puppet log from the failed node: http://pastebin.com/cYra4FQD
I dug a little bit deeper and found that the new node has built its own cluster (e.g. the haproxy resource is not present in its CIB):
################
root@node-15:~# crm_mon --one-shot
Last updated: Thu Sep 10 21:37:00 2015
Last change: Thu Sep 10 13:49:03 2015
Stack: corosync
Current DC: node-15.test.domain.local (15) - partition WITHOUT quorum
Version: 1.1.12-561c4cf
1 Nodes configured
7 Resources configured
Online: [ node-15.test.domain.local ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-15.test.domain.local ]
vip__management (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__public (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__zbx_vip_mgmt (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-15.test.domain.local ]
#################
I inspected the /etc/corosync/corosync.conf file on an old controller and did not find the new node in the cluster's node list. I also took a look at the astute.yaml file on the old controller nodes and did not find the 'cluster' task in the task list - http://pastebin.com/ATuduYBB - which matters because, in my opinion, this task is in charge of configuring the corosync.conf file.
################
- priority: 1100
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp"
timeout: 3600
cwd: "/"
###############
So it seems that we do not re-apply this task when new controller nodes are added, and the existing cluster nodes cannot successfully communicate with the new one. I don't know how the set of tasks for already-deployed ('ready') controller nodes is calculated, but it seems that the 'cluster' task should be taken into account in this case. |
|
2015-09-10 22:00:16 |
slava valyavskiy |
description |
Steps to reproduce:
1. Create an Ubuntu cluster: 3 controller/mongo + 2 compute
2. Deploy it
3. Add one more controller node to the cluster after the deployment process has finished
4. Start the deployment
Expected results:
The cluster with the new controller is successfully deployed
Actual results:
The deployment fails on the cluster-haproxy task:
################
priority: 1500
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp"
timeout: 3600
cwd: "/"
################
Puppet log from the failed node: http://pastebin.com/cYra4FQD
I dug a little bit deeper and found that the new node has built its own cluster (e.g. the haproxy resource is not present in its CIB):
################
root@node-15:~# crm_mon --one-shot
Last updated: Thu Sep 10 21:37:00 2015
Last change: Thu Sep 10 13:49:03 2015
Stack: corosync
Current DC: node-15.test.domain.local (15) - partition WITHOUT quorum
Version: 1.1.12-561c4cf
1 Nodes configured
7 Resources configured
Online: [ node-15.test.domain.local ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-15.test.domain.local ]
vip__management (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__public (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__zbx_vip_mgmt (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-15.test.domain.local ]
#################
I inspected the /etc/corosync/corosync.conf file on an old controller and did not find the new node in the cluster's node list. I also took a look at the astute.yaml file on the old controller nodes and did not find the 'cluster' task in the task list - http://pastebin.com/ATuduYBB - which matters because, in my opinion, this task is in charge of configuring the corosync.conf file.
################
- priority: 1100
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp"
timeout: 3600
cwd: "/"
###############
So it seems that we do not re-apply this task when new controller nodes are added, and the existing cluster nodes cannot successfully communicate with the new one. I don't know how the set of tasks for already-deployed ('ready') controller nodes is calculated, but it seems that the 'cluster' task should be taken into account in this case. |
ISO info:
####################
VERSION:
feature_groups:
- mirantis
production: "docker"
release: "7.0"
openstack_version: "2015.1.0-7.0"
api: "1.0"
build_number: "263"
build_id: "263"
######################
Steps to reproduce:
1. Update the Fuel master node from the 5.1 to the 7.0 release
2. Create an Ubuntu cluster: 3 controller/mongo + 2 compute
3. Deploy it
4. Add one more controller node to the cluster after the deployment process has finished
5. Start the deployment
Expected results:
The cluster with the new controller is successfully deployed
Actual results:
The deployment fails on the cluster-haproxy task:
################
priority: 1500
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster-haproxy/cluster-haproxy.pp"
timeout: 3600
cwd: "/"
################
Puppet log from the failed node: http://pastebin.com/cYra4FQD
I dug a little bit deeper and found that the new node has built its own cluster (e.g. the haproxy resource is not present in its CIB):
################
root@node-15:~# crm_mon --one-shot
Last updated: Thu Sep 10 21:37:00 2015
Last change: Thu Sep 10 13:49:03 2015
Stack: corosync
Current DC: node-15.test.domain.local (15) - partition WITHOUT quorum
Version: 1.1.12-561c4cf
1 Nodes configured
7 Resources configured
Online: [ node-15.test.domain.local ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-15.test.domain.local ]
vip__management (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__public (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
vip__zbx_vip_mgmt (ocf::fuel:ns_IPaddr2): Started node-15.test.domain.local
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-15.test.domain.local ]
#################
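For reference, one way to double-check that the haproxy primitive is really absent from the new node's CIB would be something like the following (a suggested command, not output captured from the failed environment):
################
cibadmin -Q -o resources | grep -i haproxy
# no output here would confirm that no haproxy primitive is configured on this node
################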
I inspected the /etc/corosync/corosync.conf file on an old controller and did not find the new node in the cluster's node list. I also took a look at the astute.yaml file on the old controller nodes and did not find the 'cluster' task in the task list - http://pastebin.com/ATuduYBB - which matters because, in my opinion, this task is in charge of configuring the corosync.conf file.
################
- priority: 1100
type: puppet
uids:
- '15'
parameters:
puppet_modules: "/etc/puppet/modules"
puppet_manifest: "/etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp"
timeout: 3600
cwd: "/"
###############
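For illustration only - the hostnames and node IDs below are placeholders rather than values taken from this environment - the node list that this task renders into /etc/corosync/corosync.conf would be expected to contain an entry for every controller, including the newly added one, roughly like this:
################
nodelist {
  node {
    # one entry per already-deployed controller (placeholder values)
    ring0_addr: node-1.test.domain.local
    nodeid: 1
  }
  node {
    # entry for the newly added controller; this is what is missing
    # from corosync.conf on the old controllers right now
    ring0_addr: node-15.test.domain.local
    nodeid: 15
  }
}
################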
So it seems that we do not re-apply this task when new controller nodes are added, and the existing cluster nodes cannot successfully communicate with the new one. I don't know how the set of tasks for already-deployed ('ready') controller nodes is calculated, but it seems that the 'cluster' task should be taken into account in this case.
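As a rough sketch only (assuming the manifest and module paths from the task definition above exist on the already-deployed controllers), the task could in principle be re-applied there by hand, which would also help confirm the diagnosis:
################
# Sketch, not a verified workaround: re-apply the modular 'cluster' task
# on each already-deployed controller so corosync.conf picks up the new node.
puppet apply --modulepath=/etc/puppet/modules \
  /etc/puppet/modules/osnailyfacter/modular/cluster/cluster.pp
################
The failed cluster-haproxy task would presumably still need to be re-run on the new node afterwards. |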
|
2015-09-10 22:00:28 |
Oleg S. Gelbukh |
bug |
|
|
added subscriber Oleg S. Gelbukh |
2015-09-10 22:01:21 |
slava valyavskiy |
attachment added |
|
astute.log https://bugs.launchpad.net/fuel/+bug/1494507/+attachment/4460917/+files/astute.log |
|
2015-09-10 22:33:06 |
slava valyavskiy |
fuel: assignee |
|
Fuel Library Team (fuel-library) |
|
2015-09-10 23:22:39 |
Ksenia Svechnikova |
fuel: milestone |
|
7.0 |
|
2015-09-10 23:24:19 |
Ksenia Svechnikova |
nominated for series |
|
fuel/7.0.x |
|
2015-09-10 23:24:19 |
Ksenia Svechnikova |
bug task added |
|
fuel/7.0.x |
|
2015-09-11 04:30:41 |
Nastya Urlapova |
fuel: status |
New |
Incomplete |
|
2015-09-11 07:58:19 |
Ivan Kliuk |
fuel/7.0.x: assignee |
|
Fuel Library Team (fuel-library) |
|
2015-09-11 07:58:23 |
Ivan Kliuk |
fuel/7.0.x: status |
New |
Incomplete |
|
2015-09-11 08:31:02 |
Michael Polenchuk |
bug |
|
|
added subscriber Michael Polenchuk |
2015-09-28 12:27:38 |
Oleg S. Gelbukh |
fuel/7.0.x: assignee |
Fuel Library Team (fuel-library) |
Fuel Octane Dev Team (fuel-octane) |
|
2015-09-29 19:19:15 |
OpenStack Infra |
fuel: status |
Incomplete |
In Progress |
|
2015-09-29 19:19:15 |
OpenStack Infra |
fuel: assignee |
Fuel Library Team (fuel-library) |
Oleg S. Gelbukh (gelbuhos) |
|
2015-09-29 19:19:50 |
Oleg S. Gelbukh |
fuel: importance |
Undecided |
High |
|
2015-09-29 19:20:06 |
Oleg S. Gelbukh |
fuel: milestone |
7.0 |
7.0-updates |
|
2015-09-29 21:38:32 |
Mike Scherbakov |
nominated for series |
|
fuel/8.0.x |
|
2015-09-29 21:38:32 |
Mike Scherbakov |
bug task added |
|
fuel/8.0.x |
|
2015-09-29 21:38:50 |
Mike Scherbakov |
fuel/7.0.x: milestone |
|
7.0-updates |
|
2015-09-29 21:38:53 |
Mike Scherbakov |
fuel/8.0.x: milestone |
7.0-updates |
8.0 |
|
2015-09-29 21:39:00 |
Mike Scherbakov |
fuel/7.0.x: status |
Incomplete |
Confirmed |
|
2015-10-02 17:27:13 |
Oleg S. Gelbukh |
fuel/7.0.x: importance |
Undecided |
High |
|
2015-10-02 17:27:19 |
Oleg S. Gelbukh |
fuel/7.0.x: status |
Confirmed |
In Progress |
|
2015-10-02 17:27:48 |
Oleg S. Gelbukh |
bug task deleted |
fuel/8.0.x |
|
|
2015-10-06 14:28:45 |
Matthew Mosesohn |
fuel/7.0.x: status |
In Progress |
Fix Committed |
|
2015-10-08 13:42:33 |
Oleg S. Gelbukh |
tags |
|
feature-upgrade |
|
2015-10-08 14:17:37 |
Oleg S. Gelbukh |
tags |
feature-upgrade |
feature-upgrade module-octane |
|
2015-10-22 03:57:08 |
Dmitry Pyzhov |
tags |
feature-upgrade module-octane |
area-octane feature-upgrade module-octane |
|
2015-10-29 17:47:01 |
Dmitry Pyzhov |
tags |
area-octane feature-upgrade module-octane |
area-python feature-upgrade module-octane |
|
2015-11-17 23:39:56 |
OpenStack Infra |
fuel: assignee |
Oleg S. Gelbukh (gelbuhos) |
Yuriy Taraday (yorik-sar) |
|
2015-11-18 11:54:21 |
OpenStack Infra |
fuel: status |
In Progress |
Fix Committed |
|
2015-12-22 11:55:00 |
Bogdan Dobrelya |
fuel: status |
Fix Committed |
Confirmed |
|
2015-12-22 12:02:13 |
slava valyavskiy |
fuel: assignee |
Yuriy Taraday (yorik-sar) |
|
|
2015-12-22 12:02:26 |
slava valyavskiy |
fuel: assignee |
|
Fuel Python Team (fuel-python) |
|
2015-12-22 13:04:43 |
Bogdan Dobrelya |
tags |
area-python feature-upgrade module-octane |
area-python feature-upgrade granular life-cycle-management module-octane |
|
2015-12-22 14:36:30 |
Bogdan Dobrelya |
fuel: status |
Confirmed |
Fix Committed |
|
2016-01-15 08:31:53 |
Vladimir |
tags |
area-python feature-upgrade granular life-cycle-management module-octane |
area-python feature-upgrade granular life-cycle-management module-octane on-verification |
|
2016-01-22 08:14:14 |
Tatyanka |
fuel: status |
Fix Committed |
Fix Released |
|
2016-01-29 09:10:25 |
Vladimir |
tags |
area-python feature-upgrade granular life-cycle-management module-octane on-verification |
area-python feature-upgrade granular life-cycle-management module-octane |
|
2016-05-18 18:20:55 |
Oleg S. Gelbukh |
fuel/7.0.x: assignee |
Registry Administrators (registry) |
Fuel Python (Deprecated) (fuel-python) |
|
2016-05-18 18:21:10 |
Oleg S. Gelbukh |
fuel/7.0.x: assignee |
Fuel Python (Deprecated) (fuel-python) |
Fuel Octane (fuel-octane-team) |
|
2017-04-14 00:57:40 |
Curtis Hovey |
fuel/7.0.x: assignee |
Registry Administrators (registry) |
|
|