test_update_access_server_address failing in check-tempest-dsvm-postgres-full
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | New | Undecided | Unassigned |
Bug Description
I didn't see an existing bug for this, but needed one to recheck against:
http://
2013-12-05 01:01:31.307 | Traceback (most recent call last):
2013-12-05 01:01:31.307 | File "tempest/
2013-12-05 01:01:31.307 | resp, server = self.create_
2013-12-05 01:01:31.308 | File "tempest/
2013-12-05 01:01:31.308 | server['id'], kwargs[
2013-12-05 01:01:31.308 | File "tempest/
2013-12-05 01:01:31.308 | extra_timeout=
2013-12-05 01:01:31.308 | File "tempest/
2013-12-05 01:01:31.309 | raise exceptions.
2013-12-05 01:01:31.309 | BuildErrorExcep
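The traceback above ends in a BuildError exception raised while waiting for the server to reach the expected status. As a rough illustration only, here is a minimal sketch of the kind of status-polling loop that tempest's wait helper implements; the `get_status` callable, the status names, and the timeout handling are illustrative assumptions, not tempest's actual code.

```python
import time


class BuildErrorException(Exception):
    """Raised when the server goes to ERROR instead of the wanted state."""


def wait_for_server_status(get_status, wanted="ACTIVE",
                           timeout=10.0, interval=0.1):
    # Hypothetical polling loop: repeatedly ask for the server status
    # until it reaches the wanted state, errors out, or we time out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status == "ERROR":
            # Mirrors the failure in the traceback: the build errored out
            # before the server ever became ACTIVE.
            raise BuildErrorException("server went to ERROR state")
        time.sleep(interval)
    raise TimeoutError("server never reached %s" % wanted)
```

In the failing run, the scheduler never found a usable host, so the server presumably flipped to an error state and the wait loop bailed out with the exception shown in the traceback.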
I do see this in the nova scheduler log:
2013-12-05 00:41:45.588 INFO nova.filters [req-29e72655-
2013-12-05 00:41:45.588 WARNING nova.scheduler.
Looks like the ComputeFilter couldn't find a host, so it bombs out. I'm not sure why the timestamps don't line up with the failure in console.html.
Looks like this has been showing up since at least 11/22:
The ComputeFilter itself hasn't changed since 8/21, so nothing interesting there.
This nova scheduling fix merged on 11/21, but I'm not sure why or how it would cause this:
https:/
Seeing this right before it fails, indicating the host is down:
2013-12-05 00:41:45.588 DEBUG nova.scheduler.filters.compute_filter [req-29e72655-eb62-44a9-8149-e031de33a85f ServersTestJSON-tempest-2138013281-user ServersTestJSON-tempest-2138013281-tenant] (devstack-precise-check-rax-ord-791311.slave.openstack.org, devstack-precise-check-rax-ord-791311.slave.openstack.org) ram:6930 disk:297984 io_ops:1 instances:7 is disabled or has not been heard from in a while host_passes /opt/stack/new/nova/nova/scheduler/filters/compute_filter.py:44
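The "is disabled or has not been heard from in a while" message suggests the filter rejects a host when its compute service is either administratively disabled or has missed its liveness heartbeat. A minimal sketch of that check, assuming an illustrative `service` dict and `service_is_up` flag (not nova's exact ComputeFilter implementation):

```python
def host_passes(service, service_is_up):
    """Return True only if the compute service is enabled and alive.

    Hypothetical simplification of the check behind the log message:
    a host fails if the operator disabled it, or if its nova-compute
    service has not reported in recently (service_is_up is False).
    """
    if service.get("disabled"):
        return False  # operator disabled the host
    if not service_is_up:
        # "has not been heard from in a while": no recent heartbeat
        return False
    return True
```

Either condition alone is enough to filter the host out, which would leave the scheduler with no valid hosts and produce the failure above.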
Phil Day has a patch related to that message: https://review.openstack.org/#/c/58118/