Greetings Sofia Enriquez,
1. Steps to reproduce are, for example, the following:
- Boot an instance from a volume (Ceph backend)
- Create a snapshot of the volume
- Reset the state of this volume to available
- Try to revert to the latest snapshot
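The steps above can be sketched with the standard CLI tools; this assumes a deployment with Cinder backed by Ceph RBD, and the volume/snapshot/instance names are placeholders:

```shell
# Create a volume on the Ceph backend and boot an instance from it.
openstack volume create --size 1 vol1
openstack server create --flavor m1.tiny --volume vol1 inst1

# Snapshot the volume while it is attached (in-use).
openstack volume snapshot create --force --volume vol1 snap1

# Force the volume state back to "available" even though the
# instance still holds it open (a watcher exists on the RBD image).
cinder reset-state --state available vol1

# The revert is accepted by the Cinder API (microversion 3.40+),
# but rolling back the image under an active watcher destroys data.
cinder --os-volume-api-version 3.40 revert-to-snapshot snap1
```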
Alternatively, there could be delays or problems with the Cinder services, e.g. timeouts, which could leave the volume in a state where it appears revertible from the Cinder API's point of view (the state was reset to available), but such a revert would destroy data.
2. In fact, there are no problems in the Cinder logs themselves; all the problems are on the backend side. The source of the problem is that it is unacceptable to roll back an image while it still has watchers, yet there is no check for whether any watchers are present.
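To illustrate the missing check, here is a hypothetical guard of the kind the RBD driver could apply before reverting; `safe_revert` and its arguments are illustrative names, not the actual driver API, and the watcher list is the sort of information `rbd status` reports for an image:

```python
class VolumeIsBusy(Exception):
    """Raised when an RBD image still has watchers attached."""


def safe_revert(image_name, watchers, revert_fn):
    """Refuse to revert while any client still watches the image.

    `watchers` would come from the backend (e.g. the output of
    `rbd status`, which lists clients attached via librbd).
    """
    if watchers:
        raise VolumeIsBusy(
            "refusing to revert %s: %d watcher(s) attached"
            % (image_name, len(watchers)))
    revert_fn(image_name)


# Usage sketch: a revert proceeds only when no watchers are present.
reverted = []
safe_revert("volume-1234", [], reverted.append)   # no watchers -> ok

try:
    safe_revert("volume-1234", ["client.4567"], reverted.append)
except VolumeIsBusy:
    pass  # watcher present -> revert refused, data preserved
```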
I will try to find Ceph logs indicating this problem.