Comment 3 for bug 1607790

Pascal Mazon (pascal-mazon-u) wrote : Re: neutron-plugin-openvswitch-agent should be running but is not

Actually, the "neutron-plugin-openvswitch-agent" service does not exist. However, the "neutron-openvswitch-agent" service is running:

root@controller:~# systemctl status neutron-plugin-openvswitch-agent
● neutron-plugin-openvswitch-agent.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
root@controller:~# systemctl status neutron-openvswitch-agent
● neutron-openvswitch-agent.service - Openstack Neutron Open vSwitch Plugin Agent
   Loaded: loaded (/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2016-07-28 10:38:32 UTC; 5 days ago
 Main PID: 17963 (neutron-openvsw)
    Tasks: 1
   Memory: 36.8M
      CPU: 4min 59.762s
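
The unit-name mix-up above can be checked generically. A minimal sketch, using a hard-coded sample unit list for illustration (on a real controller you would capture it with `systemctl list-unit-files 'neutron-*'` instead):

```shell
# Sample output of `systemctl list-unit-files 'neutron-*'`; on a real host,
# capture it with: available=$(systemctl list-unit-files 'neutron-*')
available='neutron-openvswitch-agent.service   enabled
neutron-dhcp-agent.service          enabled'

# Try both spellings of the agent unit and keep the one that exists.
found=''
for unit in neutron-plugin-openvswitch-agent neutron-openvswitch-agent; do
  if printf '%s\n' "$available" | grep -q "^${unit}\.service"; then
    found="$unit"
  fi
done
echo "existing unit: ${found}.service"
```

With the sample list above, only "neutron-openvswitch-agent" is reported as existing, matching the systemctl output shown.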

Regarding the nova show output, here it is:

+--------------------------------------+-------------------------------------------------+
| Property | Value |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2016-08-03T08:05:25Z |
| fault | {"message": "404 Not Found |
| | |
| | The resource could not be found. |
| | |
| | |
| | Neutron server returns request_ids: ['req-0e122380-69c8-4d9a-ac4f-c0e099c40415']", "code": 500, "details": " File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 1926, in _do_build_and_run_instance |
| | filter_properties) |
| | File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2116, in _build_and_run_instance |
| | instance_uuid=instance.uuid, reason=six.text_type(e)) |
| | ", "created": "2016-08-03T08:05:46Z"} |
| flavor | m1.teeny (cdc244e9-44d5-46cb-9c65-353f72a5cdc7) |
| hostId | |
| id | 9cd8c46c-519c-4dcb-84e7-d86b53a2be68 |
| image | trusty (56044354-1374-4157-a577-fcc7354bea69) |
| key_name | maas |
| metadata | {} |
| name | test-server1 |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tenant_id | ea160e8920a049cc8912c0852a4c939d |
| updated | 2016-08-03T08:05:46Z |
| user_id | 4e85ac76c4ae4332a96815b51cef7141 |
+--------------------------------------+-------------------------------------------------+

The fault message points to an issue with Neutron, which is consistent with the juju neutron-gateway blocked status, I guess.
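
The fault field above embeds the Neutron request id, which can then be grepped for in the neutron-server log on the gateway to find the failing call. A small sketch, with the fault text hard-coded from the table above (the log path is an assumption and may differ per deployment):

```shell
# Extract the Neutron request id from the fault message shown in `nova show`.
fault="Neutron server returns request_ids: ['req-0e122380-69c8-4d9a-ac4f-c0e099c40415']"
req_id=$(printf '%s\n' "$fault" | grep -o "req-[0-9a-f-]*")
echo "$req_id"
# Then, on the neutron-gateway unit (path is deployment-dependent):
#   grep "$req_id" /var/log/neutron/neutron-server.log
```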

I've attached the logs from nova-compute/0 and neutron-gateway/0.

For completeness, here are the commands I typed on my compute node (a little faster than on the controller, as it's not in an LXC container):

cd
scp root@$maas_ip:/root/openstackrc .
. ./openstackrc

# set up basic network config
neutron net-create private
neutron subnet-create --name private_subnet private 11.0.0.0/24 \
  --allocation-pool start=11.0.0.2,end=11.0.0.100
neutron router-create router1
neutron router-interface-add router1 private_subnet

# set up key-pair
scp root@$maas_ip:/var/lib/maas/.ssh/id_rsa.pub .
nova keypair-add --pub-key id_rsa.pub maas

# set up flavor
nova flavor-create m1.teeny auto 1024 5 1

# set up image
scp -C root@10.19.0.200:~/ubuntu-14.04-cloud-last.qcow2 .
glance image-create --name trusty --disk-format qcow2 --container-format bare \
  --file ubuntu-14.04-cloud-last.qcow2

# launch instance
net_id=$(neutron net-list | grep private | awk '{ print $2 }')

nova boot test-server1 \
  --availability-zone nova:compute1 \
  --nic net-id=$net_id \
  --flavor m1.teeny \
  --image trusty \
  --key-name maas
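
A side note on the "neutron net-list | grep private" extraction a few lines up: it also matches any network whose name merely contains "private" (e.g. "private2"), and would then pick up several ids. A stricter variant matches the name column exactly. A sketch over a hard-coded sample net-list table (both UUIDs below are made up for illustration; in practice you would pipe `neutron net-list` directly):

```shell
# Sample `neutron net-list` output; on a real cloud: list=$(neutron net-list)
list='+--------------------------------------+----------+---------+
| id                                   | name     | subnets |
+--------------------------------------+----------+---------+
| 11111111-2222-3333-4444-555555555555 | private2 | ...     |
| aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee | private  | ...     |
+--------------------------------------+----------+---------+'

# $2 is the id column, $4 the name column; match the name exactly.
net_id=$(printf '%s\n' "$list" | awk '$4 == "private" { print $2 }')
echo "$net_id"
```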