------------------ Original ------------------
From: "weiguo sun"<email address hidden>;
Date: Tue, Oct 17, 2017 08:12 AM
To: "xiaojun.liao"<email address hidden>;
Subject: [Bug 1578036] Re: ceph incremental backup fails in mitaka
I am still seeing the error with the above commit
(https://review.openstack.org/511232) for an "in-use" volume with a ceph
volume & backup backend. The 2nd issue pointed out by Gaudenz in post #16
doesn't seem to be resolved: the ceph driver still performs the
differential export against the original ceph volume instead of against
the snap-clone of the original ceph volume returned by
"get_backup_device" in "cinder/backup/manager.py".
When "rbd export-diff --id xxx --conf /tmp/tmpN0XVXO --pool xxx cinder-
pool-01/volume-origina-
<email address hidden> -"
is attemped, it will fail with "Numerical argument out of domain" since
the snapshot
(backup.737df596-3922-4f60-8028-81e22a22a57f.snap.1507841205.47)
generated by "source_rbd_image.create_snap(new_snap)" is against the
snap-clone instead.
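For clarity, here is a minimal sketch of the mix-up (illustrative names
and values, not the exact Cinder code; it assumes a reachable Ceph
cluster and the python rados/rbd bindings):

    import subprocess
    import rados
    import rbd

    POOL = 'cinder-pool-01'              # illustrative values only
    ORIGINAL_VOLUME = 'volume-original'  # the attached source volume
    BACKUP_DEVICE = 'volume-snap-clone'  # snap-clone from get_backup_device
    NEW_SNAP = 'backup.737df596-3922-4f60-8028-81e22a22a57f.snap.1507841205.47'

    def export_diff(pool, image, snap):
        # Stream a differential export of pool/image@snap to stdout.
        return subprocess.check_output(
            ['rbd', 'export-diff', '--pool', pool,
             '%s@%s' % (image, snap), '-'])

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)
    try:
        # The driver creates the backup snapshot on the snap-clone ...
        image = rbd.Image(ioctx, BACKUP_DEVICE)
        try:
            image.create_snap(NEW_SNAP)
        finally:
            image.close()
        # ... but then runs export-diff against the original volume,
        # where NEW_SNAP does not exist, so rbd fails with "Numerical
        # argument out of domain":
        export_diff(POOL, ORIGINAL_VOLUME, NEW_SNAP)   # fails
        # The export has to name the image the snapshot was created on:
        export_diff(POOL, BACKUP_DEVICE, NEW_SNAP)     # works
    finally:
        ioctx.close()
        cluster.shutdown()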
I am testing this against our Newton tree but I don't see any code in
the latest master branch addressing this mix-up issue.
Status in Cinder:
New
Status in os-brick:
Fix Committed
Bug description:
When I try to back up a volume (Ceph backend) via "cinder backup" to a
2nd Ceph cluster, cinder creates a full backup each time instead of a
diff backup; why a failed diff degrades to a full copy is sketched
after the rbd du output below.
mitaka release
cinder-backup 2:8.0.0-0ubuntu1 all Cinder storage service - Scheduler server
cinder-common 2:8.0.0-0ubuntu1 all Cinder storage service - common files
cinder-volume 2:8.0.0-0ubuntu1 all Cinder storage service - Volume server
python-cinder 2:8.0.0-0ubuntu1 all Cinder Python libraries
My steps are:
1. cinder backup-create a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a
2. cinder backup-create --incremental --force a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a
and what I have in Ceph backup cluster:
rbd --cluster bak -p backups du
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.37cddcbf-4a18-4f44-927d-5e925b37755f 1024M 1024M
volume-a3bacaf5-6cf8-480d-a5db-5ecdf4223b6a.backup.55e5c1a3-8c0c-4912-b98a-1ea7e6396f85 1024M 1024M
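Why a broken diff path shows up as silent full backups: the ceph backup
driver attempts the incremental transfer first and falls back to a full
copy when it fails. A simplified sketch paraphrasing the control flow in
cinder/backup/drivers/ceph.py (the two helpers are illustrative
stand-ins for the driver's private methods, not its real API):

    class BackupRBDOperationFailed(Exception):
        """Raised when the differential (rbd export-diff) path fails."""

    def backup_rbd_incremental(volume_file, backup_record):
        # Stand-in for the driver's diff-transfer helper (export-diff /
        # import-diff); simulate the diff path failing.
        raise BackupRBDOperationFailed('export-diff failed')

    def full_backup(volume_file, backup_record):
        # Stand-in for the driver's full-copy helper.
        print('falling back to full backup')

    def backup(volume_file, backup_record):
        # Simplified control flow: any failure on the incremental path
        # silently degrades to a full image copy, which is why both
        # backups above occupy the full 1024M.
        try:
            backup_rbd_incremental(volume_file, backup_record)
        except BackupRBDOperationFailed:
            full_backup(volume_file, backup_record)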
I have some free time recently; let me look into it.