Detailed bug description:
The problem was discovered on MOS 9.2 ISO #607.
It occurred during execution of the CinderVolumes.create_and_extend_volume scenario, which failed with the Rally error "Rally tired waiting for Volume".
Rally:
TimeoutException: Rally tired waiting for Volume s_rally_9ad4f65c_cPtBGovw:d40d79d8-fb50-4da6-9a9e-07c175b20f55 to become ('AVAILABLE') current status CREATING
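For reference, the wait that Rally performs here can be reproduced by hand. Below is a minimal sketch (not Rally's actual code) that creates and extends a 1 GB volume through python-cinderclient v2 and polls its status the way the scenario does; the credentials, auth URL, timeout and poll interval are placeholders, not values taken from this environment.

import time

from cinderclient import client

# Hypothetical credentials and Keystone endpoint -- substitute the
# environment's actual values.
cinder = client.Client('2', 'admin', 'admin_password',
                       'admin', 'http://192.168.0.2:5000/v2.0')

vol = cinder.volumes.create(size=1, name='manual-repro')

deadline = time.time() + 600        # assumed timeout, in line with Rally's default
while time.time() < deadline:
    vol = cinder.volumes.get(vol.id)
    if vol.status == 'available':
        cinder.volumes.extend(vol, 2)    # second step of the scenario
        break
    if vol.status == 'error':
        raise RuntimeError('volume %s went to error' % vol.id)
    time.sleep(3)
else:
    raise RuntimeError('volume %s stuck in status %r' % (vol.id, vol.status))

In the failed iterations this loop would never see 'available': the volume stays in 'creating', as shown in the cinder logs below.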
logs-node-4-10.20.0.5\node-4-10.20.0.5\var\log\cinder-all.log-20170124-1485267001:
<159>Jan 24 11:14:09 node-4 cinder-scheduler: 2017-01-24 11:14:09.535 23906 DEBUG cinder.scheduler.manager [req-32eb8cc1-a9ed-4302-a7f2-9d05b6755e4e 51e1db9e01ed433980f8e06599a45a81 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.scheduler.flows.create_volume.ExtractSchedulerSpecTask;volume:create' (3c5217f0-fa2f-4549-83a5-52a265503ef1) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'request_spec': {u'source_replicaid': None, u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'snapshot_id': None, u'display_name': u's_rally_9ad4f65c_cPtBGovw', u'availability_zone': u'nova', u'reservations': [u'48c017bb-8c8b-40a7-b319-c4b382d5248d', u'2af2838c-5daf-464f-8447-bcac58072398'], u'multiattach': False, u'attach_status': u'detached', u'display_description': None, u'cgsnapshot_id': None, u'qos_specs': None, u'encryption_key_id': None, u'replication_status': u'disabled', u'source_replicaid': None, u'source_volid': None, u'user_id': u'51e1db9e01ed433980f8e06599a45a81', u'project_id': u'008cffff0f224203b0e8658a67b8e66a', u'consistencygroup_id': None, u'size': 1, u'metadata': {}}, u'source_volid': None, u'cgsnapshot_id': None, u'volume': {u'migration_status': None, u'provider_id': None, u'availability_zone': u'nova', u'terminated_at': None, u'updated_at': None, u'provider_geometry': None, u'ec2_id': None, u'replication_status': u'disabled', u'snapshot_id': None, u'replication_extended_status': None, u'deleted_at': None, u'id': u'd40d79d8-fb50-4da6-9a9e-07c175b20f55', u'size': 1, u'display_name': u's_rally_9ad4f65c_cPtBGovw', u'display_description': None, u'name_id': u'd40d79d8-fb50-4da6-9a9e-07c175b20f55', u'volume_admin_metadata': [], u'project_id': u'008cffff0f224203b0e8658a67b8e66a', u'launched_at': None, u'metadata': {}, u'status': u'creating', u'volume_type_id': None, u'multiattach': False, u'deleted': False, u'provider_location': None, u'volume_glance_metadata': [], u'host': None, u'consistencygroup_id': None, u'source_volid': None, u'provider_auth': None, u'previous_status': Non
logs-node-4-10.20.0.5\node-4-10.20.0.5\var\log\cinder\cinder-scheduler.log:
<159>Jan 24 11:35:49 node-7 cinder-volume: 2017-01-24 11:35:49.007 20472 DEBUG cinder.volume.manager [req-32eb8cc1-a9ed-4302-a7f2-9d05b6755e4e 51e1db9e01ed433980f8e06599a45a81 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (4e75225e-ed52-4b0b-ace2-d09594876361) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,consistencygroup=<?>,consistencygroup_id=None,created_at=2017-01-24T11:14:09Z,deleted=False,deleted_at=None,display_description=None,display_name='s_rally_9ad4f65c_cPtBGovw',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,host='rbd:volumes@RBD-backend#RBD-backend',id=d40d79d8-fb50-4da6-9a9e-07c175b20f55,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id=008cffff0f224203b0e8658a67b8e66a,provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status='disabled',scheduled_at=2017-01-24T11:14:10Z,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2017-01-24T11:14:10Z,user_id=51e1db9e01ed433980f8e06599a45a81,volume_attachment=<?>,volume_type=<?>,volume_type_id=None)' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
logs-node-7-10.20.0.3\node-7-10.20.0.3\var\log\cinder-all.log-20170124-1485264601:
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.823 20472 INFO cinder.volume.flows.manager.create_volume [req-32eb8cc1-a9ed-4302-a7f2-9d05b6755e4e 51e1db9e01ed433980f8e06599a45a81 008cffff0f224203b0e8658a67b8e66a - - -] Volume d40d79d8-fb50-4da6-9a9e-07c175b20f55: being created as raw with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': 'volume-d40d79d8-fb50-4da6-9a9e-07c175b20f55'}
<159>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.824 20472 DEBUG cinder.volume.drivers.rbd [req-32eb8cc1-a9ed-4302-a7f2-9d05b6755e4e 51e1db9e01ed433980f8e06599a45a81 008cffff0f224203b0e8658a67b8e66a - - -] creating volume 'volume-d40d79d8-fb50-4da6-9a9e-07c175b20f55' create_volume /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:545
<159>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.824 20472 DEBUG cinder.volume.drivers.rbd [req-32eb8cc1-a9ed-4302-a7f2-9d05b6755e4e 51e1db9e01ed433980f8e06599a45a81 008cffff0f224203b0e8658a67b8e66a - - -] opening connection to ceph cluster (timeout=-1). _connect_to_rados /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:322
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.845 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Socket closed
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.853 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Socket closed
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.860 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Socket closed
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.867 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Socket closed
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.874 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Too many heartbeats missed
<158>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.881 20472 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Too many heartbeats missed
<156>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.888 20472 WARNING cinder.db.sqlalchemy.api [req-ab3e55d6-700d-44b6-b741-690d60f119bf 7c9bb5b8409d47ab9a77c538c2d124a6 18afab553dcc4ce09b3f0e9a4fc430ba - - -] Deadlock detected when running 'reservation_commit': Retrying...
<156>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.889 20472 WARNING cinder.db.sqlalchemy.api [req-831d07f3-2d3c-4f28-a913-a28695ca5fa3 7c9bb5b8409d47ab9a77c538c2d124a6 18afab553dcc4ce09b3f0e9a4fc430ba - - -] Deadlock detected when running 'reservation_commit': Retrying...
<159>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.890 20472 DEBUG cinder.volume.drivers.rbd [req-8837adf1-27c0-4d07-9f43-685f3778a666 7c9bb5b8409d47ab9a77c538c2d124a6 18afab553dcc4ce09b3f0e9a4fc430ba - - -] opening connection to ceph cluster (timeout=-1). _connect_to_rados /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:322
<156>Jan 24 11:39:12 node-7 cinder-volume: 2017-01-24 11:39:12.907 20472 WARNING cinder.db.sqlalchemy.api [req-df6859d4-54a6-4116-a417-e9df071aa763 7c9bb5b8409d47ab9a77c538c2d124a6 18afab553dcc4ce09b3f0e9a4fc430ba - - -] Deadlock detected when running 'quota_reserve': Retrying...
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.075 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (5274067f-f98a-4451-82fb-a19854b88122) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,consistencygroup=<?>,consistencygroup_id=None,created_at=2017-01-24T11:24:12Z,deleted=False,deleted_at=None,display_description=None,display_name='s_rally_9ad4f65c_VUsYZ8RB',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,host='rbd:volumes@RBD-backend#RBD-backend',id=3634815f-fd10-48f3-ac80-da5a8dc741d4,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id=008cffff0f224203b0e8658a67b8e66a,provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status='disabled',scheduled_at=2017-01-24T11:24:12Z,size=1,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2017-01-24T11:24:12Z,user_id=a2c9d8e5a54a45e798787b72f3a7b07f,volume_attachment=<?>,volume_type=<?>,volume_type_id=None)' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.079 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (edce4709-07c9-4dc6-a0ca-515c57a91d4d) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.081 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (edce4709-07c9-4dc6-a0ca-515c57a91d4d) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.083 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (985264f5-800d-4986-b07d-b30ebbf4a564) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.085 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (985264f5-800d-4986-b07d-b30ebbf4a564) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'status': u'creating', 'volume_size': 1, 'volume_name': 'volume-3634815f-fd10-48f3-ac80-da5a8dc741d4', 'type': 'raw', 'volume_id': '3634815f-fd10-48f3-ac80-da5a8dc741d4'}' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.087 20472 DEBUG cinder.volume.manager [req-3de8b6d7-8454-4262-82fb-352a068e9ae9 a2c9d8e5a54a45e798787b72f3a7b07f 008cffff0f224203b0e8658a67b8e66a - - -] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' (f625903f-1767-43f6-9b71-5a41199f19df) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
<159>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.091 20472 DEBUG oslo.messaging._drivers.impl_rabbit [-] Received recoverable error from kombu: on_error /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py:688
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 436, in _ensured
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit return fun(*args, **kwargs)
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 508, in __call__
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit return fun(*args, channel=channels[0], **kwargs), channels[0]
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 738, in execute_method
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit method()
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1034, in _consume
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit self.connection.drain_events(timeout=poll_timeout)
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 275, in drain_events
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit return self.transport.drain_events(self.connection, **kwargs)
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit File "/usr/lib/python2.7/dist-packages/kombu/transport/pyamqp.py", line 95, in drain_events
2017-01-24 11:39:13.091 20472 ERROR oslo.messaging._drivers.impl_rabbit return connection.drain_events(**kwar
<155>Jan 24 11:39:13 node-7 cinder-volume: 2017-01-24 11:39:13.093 20472 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server 192.168.0.6:5673 closed the connection. Check login credentials: Socket closed
Steps to reproduce:
Start the Rally test CinderVolumes.create_and_extend_volume (an illustrative task file is sketched below).
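The exact Rally task file used for this run is not attached (see the question at the end of this report), so the following JSON is only an illustrative sketch of a task that drives this scenario. "size": 1 matches the volume size seen in the logs and "times": 6 mirrors the reported iteration count; new_size, the runner type, concurrency and the users context are assumptions.

{
    "CinderVolumes.create_and_extend_volume": [
        {
            "args": {
                "size": 1,
                "new_size": 2
            },
            "runner": {
                "type": "constant",
                "times": 6,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
        }
    ]
}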
Expected result:
Test passes: each volume becomes available and is extended within the Rally timeout.
Actual result:
Test fails: the volume remains in the 'creating' status until Rally gives up with a TimeoutException.
Reproducibility:
Iterations: 6, Failures: 6
Workaround:
n/a
Impact:
unknown
Description of the environment:
* 10 virtual nodes:
- CPU: 10 x 2.10 GHz
- Disks: 2 drives (SSD - 80 GB, HDD - 931.5 GB), 1006.0 GB total
- Memory: 2 x 16.0 GB per node, 320.0 GB total
- NUMA topology: 1 NUMA node
* Node roles:
- 3 controllers
- 7 computes
- 7 Ceph OSD
* Details:
- OS on controllers: Mitaka on Ubuntu 14.04
- OS on computes: Red Hat 4.8.3-9
- Compute: KVM
- Neutron with VLAN segmentation
- Ceph RBD for volumes (Cinder)
- Ceph RadosGW for objects (Swift API)
- Ceph RBD for ephemeral volumes (Nova)
- Ceph RBD for images (Glance)
Timmy env snapshot URL: http://mos-scale-share.mirantis.com/fuel-snapshot-2017-01-26-long/
Could you provide the Rally JSON settings file that you used for these tests?