Detailed bug description:
fuel-9.0-mos-376-2016-05-19_18-18-59.iso
Steps to reproduce:
1. Create and deploy the following cluster: Neutron VLAN, Cinder/Swift, 3 controller, 2 compute, and 1 cinder node
2. Run OSTF
3. Verify networks
4. Simulate a network outage. For all networks except "admin":
- Locate the bridge associated with the network: "virsh net-dumpxml <network_name>"
- Find all interfaces attached to the bridge with "brctl show <bridge_name>" and note them
- Destroy the network: "virsh net-destroy <network_name>"
5. Restore the network connection after a 5-minute pause. For all networks except "admin":
- Restart the network: "virsh net-start <network_name>"
- Re-attach all interfaces to their bridges as noted in step 4: "brctl addif <bridge> <iface>" (a scripted version of steps 4-5 is sketched after this list)
6. Wait until the OSTF 'HA' suite passes (FAIL)
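
For convenience, a minimal sketch of steps 4-5 as a bash script. The network names below ("public", "management", "private", "storage") are assumptions about the lab topology, not taken from this report; adjust them to the actual environment.

#!/bin/bash
# Sketch only: NETWORKS lists every libvirt network except "admin";
# the actual names depend on how the lab was provisioned.
NETWORKS="public management private storage"

declare -A IFACES
for net in $NETWORKS; do
    # The bridge name comes from the <bridge name='...'/> element of the network XML.
    bridge=$(virsh net-dumpxml "$net" | sed -n "s/.*<bridge name='\([^']*\)'.*/\1/p")
    # Step 4: remember the interfaces attached to the bridge, then kill the network.
    # (Assumes each bridge has at least one attached interface.)
    IFACES[$net]="$(brctl show "$bridge" | awk 'NR > 1 {print $NF}')"
    virsh net-destroy "$net"
done

sleep 300    # 5-minute pause

for net in $NETWORKS; do
    # Step 5: bring the network back and re-attach the interfaces saved in step 4.
    virsh net-start "$net"
    bridge=$(virsh net-dumpxml "$net" | sed -n "s/.*<bridge name='\([^']*\)'.*/\1/p")
    for iface in ${IFACES[$net]}; do
        brctl addif "$bridge" "$iface"
    done
done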
Expected results:
All steps OK
Actual result:
Step #6 fails:
Time limit exceeded
root@node-1:~# haproxy-status.sh | grep DOWN
root@node-1:~# crm status
Last updated: Tue May 24 09:27:23 2016    Last change: Mon May 23 16:27:07 2016 by root via cibadmin on node-2.test.domain.local
Stack: corosync
Current DC: node-1.test.domain.local (version 1.1.14-70404b0) - partition with quorum
3 nodes and 46 resources configured
Online: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_p_vrouter [p_vrouter]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 vip__management (ocf::fuel:ns_IPaddr2): Started node-1.test.domain.local
 vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-2.test.domain.local
 vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-2.test.domain.local
 vip__public (ocf::fuel:ns_IPaddr2): Started node-3.test.domain.local
 Clone Set: clone_p_haproxy [p_haproxy]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 sysinfo_node-3.test.domain.local (ocf::pacemaker:SysInfo): Started node-3.test.domain.local
 sysinfo_node-2.test.domain.local (ocf::pacemaker:SysInfo): Started node-2.test.domain.local
 Master/Slave Set: master_p_conntrackd [p_conntrackd]
     Masters: [ node-2.test.domain.local ]
     Slaves: [ node-1.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_p_mysqld [p_mysqld]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Master/Slave Set: master_p_rabbitmq-server [p_rabbitmq-server]
     Masters: [ node-1.test.domain.local ]
     Slaves: [ node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_p_dns [p_dns]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 sysinfo_node-1.test.domain.local (ocf::pacemaker:SysInfo): Started node-1.test.domain.local
 Clone Set: clone_neutron-openvswitch-agent [neutron-openvswitch-agent]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_neutron-l3-agent [neutron-l3-agent]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_p_heat-engine [p_heat-engine]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_neutron-metadata-agent [neutron-metadata-agent]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_neutron-dhcp-agent [neutron-dhcp-agent]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_p_ntp [p_ntp]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
 Clone Set: clone_ping_vip__public [ping_vip__public]
     Started: [ node-1.test.domain.local node-2.test.domain.local node-3.test.domain.local ]
root@node-1:~# rabbitmqctl cluster_status
Cluster status of node 'rabbit@messaging-node-1' ...
< command hangs in this state >
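
When re-checking this, it helps to bound the hanging call so the probe itself does not block forever; a minimal sketch using the coreutils timeout wrapper (the 30-second limit is an arbitrary choice, not part of this report):

timeout 30 rabbitmqctl cluster_status
if [ $? -eq 124 ]; then
    # timeout(1) exits with status 124 when it had to kill the command
    echo "rabbitmqctl cluster_status hung - RabbitMQ cluster is stuck"
fi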