Activity log for bug #1714248

Date Who What changed Old value New value Message
2017-08-31 13:30:42 Hironori Shiina bug added bug
2017-08-31 21:03:36 Matt Riedemann tags ironic placement
2017-08-31 23:50:26 Hironori Shiina description changed. The old value matched the new value below, except that the expected and actual results were swapped (expected: the instance creation fails; actual: the instance is created) and the hypervisor name was described as the ironic node name rather than the ironic node UUID. New value:

Description
===========
In an environment with multiple compute nodes running the ironic driver, when a compute node goes down, another compute node cannot take over its ironic nodes.

Steps to reproduce
==================
1. Start multiple compute nodes with the ironic driver.
2. Register one node to ironic.
3. Stop the compute node which manages the ironic node.
4. Create an instance.

Expected result
===============
The instance is created.

Actual result
=============
The instance creation fails.

Environment
===========
1. Exact version of OpenStack you are running:
openstack-nova-scheduler-15.0.6-2.el7.noarch
openstack-nova-console-15.0.6-2.el7.noarch
python2-novaclient-7.1.0-1.el7.noarch
openstack-nova-common-15.0.6-2.el7.noarch
openstack-nova-serialproxy-15.0.6-2.el7.noarch
openstack-nova-placement-api-15.0.6-2.el7.noarch
python-nova-15.0.6-2.el7.noarch
openstack-nova-novncproxy-15.0.6-2.el7.noarch
openstack-nova-api-15.0.6-2.el7.noarch
openstack-nova-conductor-15.0.6-2.el7.noarch

2. Which hypervisor did you use?
ironic

Details
=======
When a nova-compute goes down, another nova-compute takes over the ironic nodes it managed by re-balancing the hash ring. The surviving nova-compute then tries to create a new resource provider using a new ComputeNode object UUID and the hypervisor name (the ironic node UUID) [1][2][3]. This creation fails with a conflict (409) because a resource provider with the same name was already created by the failed nova-compute. When a new instance is requested, the scheduler gets only the old resource provider for the ironic node [4], so the ironic node is not selected:

WARNING nova.scheduler.filters.compute_filter [req-a37d68b5-7ab1-4254-8698-502304607a90 7b55e61a07304f9cab1544260dcd3e41 e21242f450d948d7af2650ac9365ee36 - - -] (compute02, 8904aeeb-a35b-4ba3-848a-73269fdde4d3) ram: 4096MB disk: 849920MB io_ops: 0 instances: 0 has not been heard from in a while

[1] https://github.com/openstack/nova/blob/stable/ocata/nova/compute/resource_tracker.py#L464
[2] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L630
[3] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L410
[4] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filter_scheduler.py#L183
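The conflict in the Details section can be illustrated with a minimal sketch against the placement API, which enforces unique resource provider names. The endpoint URL, token, and generated ComputeNode UUIDs below are assumptions for illustration only; they are not taken from the bug report or the Nova code.

# Sketch of the 409 described above; endpoint, token, and UUIDs are assumed.
import uuid
import requests

PLACEMENT_RP_URL = "http://controller:8778/placement/resource_providers"  # assumed endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}  # assumed admin token

# The ironic node UUID is used as the resource provider *name*.
ironic_node = "8904aeeb-a35b-4ba3-848a-73269fdde4d3"

# 1) The original nova-compute registered a provider for the node.
r1 = requests.post(PLACEMENT_RP_URL, headers=HEADERS,
                   json={"uuid": str(uuid.uuid4()), "name": ironic_node})
print(r1.status_code)  # 201: provider created under the old ComputeNode UUID

# 2) After the hash ring re-balances, the surviving nova-compute builds a new
#    ComputeNode record (new UUID) and asks placement for a provider with the
#    same name; placement rejects the duplicate name.
r2 = requests.post(PLACEMENT_RP_URL, headers=HEADERS,
                   json={"uuid": str(uuid.uuid4()), "name": ironic_node})
print(r2.status_code)  # 409: name conflict, so the stale provider keeps serving the node

Because the second create fails, the scheduler continues to see only the stale provider owned by the dead compute host, which is why the ComputeFilter logs "has not been heard from in a while" and rejects the node.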
2017-09-01 15:52:12 Chris Dent bug added subscriber Chris Dent
2017-09-05 19:13:56 Sean Dague nova: status New Confirmed
2017-09-05 19:14:02 Sean Dague nova: importance Undecided High
2017-09-13 09:41:11 Mark Goddard bug added subscriber John Garbutt
2017-09-13 09:41:17 Mark Goddard bug added subscriber Mark Goddard
2017-09-28 16:25:25 Eric Fried bug added subscriber Eric Fried
2017-09-29 13:59:29 John Garbutt nova: assignee John Garbutt (johngarbutt)
2017-09-29 13:59:31 John Garbutt nova: status Confirmed In Progress
2017-12-11 12:13:13 Dmitry Tantsur bug task added ironic
2017-12-11 12:13:20 Dmitry Tantsur ironic: status New Triaged
2017-12-11 12:13:25 Dmitry Tantsur ironic: importance Undecided Critical
2017-12-11 12:27:26 OpenStack Infra nova: assignee John Garbutt (johngarbutt) Dmitry Tantsur (divius)
2017-12-11 12:29:52 Dmitry Tantsur ironic: status Triaged In Progress
2017-12-11 12:29:56 Dmitry Tantsur ironic: assignee Dmitry Tantsur (divius)
2017-12-11 21:24:16 OpenStack Infra nova: assignee Dmitry Tantsur (divius) Matt Riedemann (mriedem)
2017-12-12 14:53:37 Matt Riedemann nova: assignee Matt Riedemann (mriedem) John Garbutt (johngarbutt)
2017-12-12 14:54:00 Matt Riedemann ironic: status In Progress Invalid
2017-12-12 14:54:05 Matt Riedemann nominated for series nova/pike
2017-12-12 14:54:05 Matt Riedemann bug task added nova/pike
2017-12-12 14:54:13 Matt Riedemann nova/pike: status New In Progress
2017-12-12 14:54:16 Matt Riedemann nova/pike: importance Undecided High
2017-12-12 14:54:22 Matt Riedemann nova/pike: assignee John Garbutt (johngarbutt)
2017-12-12 15:11:23 OpenStack Infra nova/pike: assignee John Garbutt (johngarbutt) Matt Riedemann (mriedem)
2017-12-13 13:41:15 OpenStack Infra nova: status In Progress Fix Released
2018-03-30 17:46:46 OpenStack Infra nova/pike: status In Progress Fix Committed
2018-10-03 14:46:02 Matt Riedemann nominated for series nova/ocata
2018-10-03 14:46:02 Matt Riedemann bug task added nova/ocata
2018-10-03 14:46:10 Matt Riedemann nova/ocata: status New Confirmed
2018-10-03 14:46:13 Matt Riedemann nova/ocata: importance Undecided High
2018-10-03 16:11:22 OpenStack Infra nova/ocata: status Confirmed In Progress
2018-10-03 16:11:22 OpenStack Infra nova/ocata: assignee Jay Pipes (jaypipes)
2018-10-04 21:32:52 OpenStack Infra nova/ocata: status In Progress Fix Committed