After a little more digging, I believe part of my issue with the boto3+s3 option was that I was still using the full URL rather than just the bucket name, so rather than:
boto3+s3://BUCKET_NAME/folder
I was still using:
boto3+s3://s3-us-west-2.amazonaws.com/BUCKET_NAME/folder
So this appears to be working and I'm seeing files show up in S3 and nothing has produced an error so far after six hours (and counting) of backing up.
Although it's still weird that the old-style S3 URL of s3://s3-us-west-2.amazonaws.com/BUCKET_NAME/folder works in the 0.7.19 version of Duplicity with Python 2.7, while the newer 0.8.19 (and 0.8.11) didn't seem to jive with it.
But since boto3+s3://bucket_name works with the multipart chunking, I'll just stick with this instead.
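For anyone hitting the same thing, here's a sketch of the two invocations side by side (paths and bucket name are placeholders, not my real ones). With the boto3+s3 backend, the URL should contain only the bucket name, since boto3 resolves the regional endpoint itself from your AWS config:

```shell
# Works with Duplicity 0.8.x + boto3: bucket name only,
# boto3 figures out the regional endpoint on its own.
duplicity /path/to/data "boto3+s3://BUCKET_NAME/folder"

# Old-style URL with the endpoint hostname baked in -- fine on the
# legacy boto backend in 0.7.19, but the boto3 backend chokes on it:
# duplicity /path/to/data "boto3+s3://s3-us-west-2.amazonaws.com/BUCKET_NAME/folder"
```

This is just a configuration sketch; credentials are assumed to come from the usual AWS environment variables or ~/.aws config.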