Creating an instance failed on Ubuntu 18.04 Bionic

Bug #1815272 reported by Rikimaru Honjo
Affects: nova-lxd
Status: Fix Committed
Importance: Undecided
Assigned to: Unassigned

Bug Description

[Error detail]
I built an OpenStack environment with nova-lxd using devstack on Ubuntu 18.04 Bionic.

Running stack.sh succeeded, but the instance's status went to ERROR when I created a container instance (a sketch of the create call is shown below).
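A minimal sketch of the server-create call that reproduces this; the flavor, image, and network names are assumptions for illustration, not taken from this report:

    # Any container instance triggers the failure; the names below are examples.
    openstack server create --flavor m1.tiny --image bionic --network private test-lxd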
The following error was output to the nova-compute log at that time:

Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [None req-cab4b18e-bf40-4802-a84d-a5cc35d0f390 demo admin] [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] Instance failed to spawn: LXDAPIException: No storage pool found. Please
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] Traceback (most recent call last):
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/opt/stack/nova/nova/compute/manager.py", line 2379, in _build_resources
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] yield resources
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/opt/stack/nova/nova/compute/manager.py", line 2142, in _build_and_run_instance
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] block_device_info=block_device_info)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 582, in spawn
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] context, instance, network_info, block_device_info)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] self.force_reraise()
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] six.reraise(self.type_, self.value, self.tb)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 578, in spawn
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] container_config, wait=True)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/usr/local/lib/python2.7/dist-packages/pylxd/models/container.py", line 276, in create
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] response = client.api.containers.post(json=config, target=target)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/usr/local/lib/python2.7/dist-packages/pylxd/client.py", line 168, in post
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] self._assert_response(response, allowed_status_codes=(200, 201, 202))
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] File "/usr/local/lib/python2.7/dist-packages/pylxd/client.py", line 108, in _assert_response
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] raise exceptions.LXDAPIException(response)
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332] LXDAPIException: No storage pool found. Please create a new storage pool
Feb 09 06:58:04 bio-lxd-1 nova-compute[10990]: ERROR nova.compute.manager [instance: 7dc5e8d7-d39d-44e1-90ee-6816fb9a9332]
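The exception comes straight from the LXD API: the LXD daemon on the compute host has no storage pool configured. This can be checked with the stock lxc client; a sketch, assuming the client is available on the compute host:

    # List the storage pools LXD knows about; on an unconfigured LXD 3.0
    # installation this table is empty, which matches the error above.
    lxc storage list

    # Optional manual workaround: create a ZFS-backed pool named "default"
    # (with no source given, LXD creates a loop-backed dataset itself).
    lxc storage create default zfs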

[How to solve this error]
I added the following setting to my local.conf, which solved the error:

LXD_BACKEND_DRIVER=zfs

In my understanding, LXD 3.0 requires this setting.
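For context, the setting goes in the [[local|localrc]] section of devstack's local.conf; a minimal sketch, where the enable_plugin line is the standard way to pull in nova-lxd (repository URL as linked in the merged commit below) rather than something taken from this report:

    [[local|localrc]]
    # Pull in the nova-lxd devstack plugin.
    enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd
    # Back LXD with ZFS; needed with LXD 3.0, and it requires zfs >= 0.7.0
    # on the host (per the merged commit below).
    LXD_BACKEND_DRIVER=zfs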

Alex Kavanagh (ajkavanagh) wrote:
Changed in nova-lxd:
status: New → Invalid

Alex Kavanagh (ajkavanagh) wrote:

Marking as invalid as it's informational rather than a bug. Thank you for the patch to add additional documentation to the project.

OpenStack Infra (hudson-openstack) wrote: Fix merged to nova-lxd (master)

Reviewed: https://review.openstack.org/636001
Committed: https://git.openstack.org/cgit/openstack/nova-lxd/commit/?id=6693f08db68626e27033a66e9751952be75ce7bc
Submitter: Zuul
Branch: master

commit 6693f08db68626e27033a66e9751952be75ce7bc
Author: Rikimaru Honjo <email address hidden>
Date: Sat Feb 9 08:13:46 2019 +0000

    Improve description about installing with devstack

    LXD_BACKEND_DRIVER=zfs should be specified in local.conf if the LXD
    version is 3.0. In addition, LXD_BACKEND_DRIVER=zfs requires zfs
    0.7.0 or higher. This patch adds this information to the README and
    local.conf.sample.

    Change-Id: I1692aefd2c4e8daba57629c5f99559ec9593fa5d
    Closes-Bug: #1815272
    Closes-Bug: #1815273

Changed in nova-lxd:
status: Invalid → Fix Committed