Here are the steps I used to reproduce this, using juju, but of course it should be simple enough to recreate with other deployment tooling.
1) juju-deployer -d -c default.yaml xenial-mitaka (sorry, the bundle is for the old juju-deployer tool rather than the native juju bundles)
See attachment for default.yaml bundle.
This bundle ensures /etc/neutron/plugins/ml2/openvswitch_agent.ini on compute nodes is using firewall_driver = openvswitch:

[securitygroup]
enable_security_group = True
firewall_driver = openvswitch
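As a sanity check that the bundle actually applied that setting, the option can be read back on a compute node; this is just a sketch and assumes crudini happens to be installed (a plain grep of the file works just as well):

# read the firewall_driver option back out of the agent config (crudini assumed to be installed)
crudini --get /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver
# should print: openvswitch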
2) Our initial cloud setup creates a standard flat external network named ext_net and a private gre network. I don't know if that makes a difference or not.
openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID                                   | Name    | Subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 2b28b833-7be9-4dde-afdd-82eccf05955f | ext_net | 34ca55db-e5af-4706-b810-4f128d34bcef |
| ed9728c0-0744-4f14-acd1-420d90de6d27 | private | ea7a07d5-8df4-4977-afd3-f61010ae38b3 |
+--------------------------------------+---------+--------------------------------------+
(clients) ubuntu@coreycb-bastion:~/openstack-charm-testing$ openstack subnet list
+--------------------------------------+----------------+--------------------------------------+-----------------+
| ID                                   | Name           | Network                              | Subnet          |
+--------------------------------------+----------------+--------------------------------------+-----------------+
| 34ca55db-e5af-4706-b810-4f128d34bcef | ext_net_subnet | 2b28b833-7be9-4dde-afdd-82eccf05955f | 10.5.0.0/16     |
| ea7a07d5-8df4-4977-afd3-f61010ae38b3 | private_subnet | ed9728c0-0744-4f14-acd1-420d90de6d27 | 192.168.21.0/24 |
+--------------------------------------+----------------+--------------------------------------+-----------------+
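For completeness, the network and subnets above come from our standard post-deploy setup; roughly equivalent commands are sketched below. The flat provider mapping (physnet1) is an assumption about this particular deployment rather than something taken from the bundle, and "private" simply picks up the configured gre tenant network type.

# rough sketch only: names and CIDRs match the listings above, physnet1 is assumed
openstack network create --external --provider-network-type flat --provider-physical-network physnet1 ext_net
openstack subnet create --network ext_net --subnet-range 10.5.0.0/16 ext_net_subnet
openstack network create private
openstack subnet create --network private --subnet-range 192.168.21.0/24 private_subnet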
3) Create the security groups and rules:
openstack security group create sec_group_A
openstack security group create sec_group_B
openstack security group rule create --ingress --proto tcp --dst-port 5682:5682 --remote-ip 0.0.0.0/0 sec_group_A
openstack security group rule create --ingress --proto tcp --dst-port 5672:5672 --remote-group sec_group_A sec_group_B
openstack security group rule create --ingress --proto tcp --remote-group sec_group_A sec_group_B
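The intent is that sec_group_B only accepts TCP from instances that are members of sec_group_A (via --remote-group); listing the rules back is a quick way to confirm they landed as expected:

# optional sanity check of the rules created above
openstack security group rule list sec_group_A
openstack security group rule list sec_group_B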
4) Create an instance using sec_group_B:
openstack server create x1 --image xenial --flavor m1.tempest --nic net-id=`openstack network list | grep private | awk '{ print $2 }'` --security-group sec_group_B
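Once the boot request is accepted, the port Neutron created for x1 and its attached security groups can be checked; this is just a sanity-check sketch, not part of the reproduction itself:

# confirm the port exists and that sec_group_B is attached to the server
openstack port list --server x1
openstack server show x1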
5) See the looping traceback on the compute node that the server gets scheduled to, in /var/log/neutron/neutron-openvswitch-agent.log (see attachment).
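For anyone trying to reproduce, the traceback repeats continuously, so something along these lines on the affected compute node surfaces it (the full text is in the attachment rather than pasted here):

# pull the repeating traceback out of the agent log on the affected compute node
grep -A 20 Traceback /var/log/neutron/neutron-openvswitch-agent.log | tail -n 40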