bulk extract-archive doesn't seem to work for non-admin user
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Object Storage (swift) | Fix Released | Medium | Unassigned |
Bug Description
When performing the bulk extract-archive operation, the action appears to function properly only for an admin user. Attempting the same action as a user whose role is granted write access via the container ACL (a ReadWrite role) fails with a 403 Forbidden on each file in the archive. NOTE: this same user can write and delete normal objects in the container successfully.
Here are details from a question I asked on this issue:
I'm trying to figure out if the bulk.py extract-archive feature is supposed to work for non-admin accounts that have the X-Container-Write privilege? I realize that if the extract-archive action would attempt to create a container, this would require that the action be performed by an admin account, but what if the extraction is to occur into an existing container with the proper X-Container-Write ACL set?
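For reference, the bulk middleware is driven by an ordinary PUT with an `?extract-archive=` query parameter whose body is the archive itself. A minimal sketch of building such a body in memory (the object names, container, and curl invocation below are illustrative placeholders, not taken from the bug report):

```python
import io
import tarfile

def build_archive(files):
    """Build an in-memory tar.gz suitable as the body of a
    PUT /v1/AUTH_acct/container?extract-archive=tar.gz request."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = build_archive({"obj1.txt": b"hello", "dir/obj2.txt": b"world"})

# The request itself would then look roughly like (placeholders):
#   curl -X PUT -H "X-Auth-Token: $TOKEN" \
#        "$STORAGE_URL/existing_container?extract-archive=tar.gz" \
#        --data-binary @archive.tar.gz
```

Each member of the archive becomes an object subrequest inside the bulk middleware, which is why the failure shows up as one 403 per file rather than a single 403 for the whole PUT.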
Anyone have any information to this effect?
Here's what I'm seeing:
1. If I authenticate as a service admin user, all bulk operations succeed (and as an admin, I appear to be immune to container quota values. Not sure if this is correct, but it makes sense).
2. If I authenticate as a user that is part of a ReadWrite role that is assigned to the X-Container-Write ACL, I get 403 Forbidden errors when attempting to auto-extract the archive either into an existing container or by trying to create the container as part of the extraction (I would expect this scenario to fail during the container creation, so that is okay).
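The check that should let case 2 succeed is simple in principle: keystoneauth grants write access when any of the user's roles appears in the container's X-Container-Write ACL. A hypothetical sketch of that style of check (this is illustrative, not Swift's actual implementation):

```python
def authorize_write(user_roles, container_write_acl):
    """Hypothetical keystoneauth-style check: grant write access if any
    of the user's roles appears in the comma-separated
    X-Container-Write ACL. Not Swift's actual code."""
    allowed = {r.strip() for r in container_write_acl.split(",") if r.strip()}
    return bool(allowed & set(user_roles))

# A user holding the ReadWrite role passes once the ACL names that role:
print(authorize_write(["ReadWrite"], "ReadWrite,SomeOtherRole"))  # True
print(authorize_write(["Member"], "ReadWrite"))                   # False
```

The point of the bug is that for bulk subrequests this check is apparently never reached with the ACL populated, so even a correctly configured ACL yields a 403.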
What I've verified is that during a normal upload the keystoneauth authorize function is hit twice: once from proxy.server (where the getattr(req, 'acl', None) call returns no role definitions), and then again from proxy.controller.
During an auto-extract, however, for each file being extracted only the first call, from the proxy.server class, ever reaches keystoneauth.
I'm at a loss to explain this difference in behavior, other than that bulk.py makes its subrequests using Request.blank(). Is something getting lost in the mix?
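The suspicion above can be illustrated with plain WSGI environs: a subrequest built from a blank environ silently drops any per-request keys that earlier middleware attached to the parent request, unless they are explicitly copied across. A minimal sketch under that assumption (the key names below are illustrative, not necessarily the ones Swift uses):

```python
def blank_environ(path):
    """Minimal stand-in for Request.blank(): a fresh WSGI environ with
    none of the parent request's middleware-added keys."""
    return {"PATH_INFO": path, "REQUEST_METHOD": "PUT"}

def make_subrequest(parent_env, path, copy_keys=()):
    """Build a subrequest environ, optionally carrying over the
    parent's auth context (key names are hypothetical)."""
    env = blank_environ(path)
    for key in copy_keys:
        if key in parent_env:
            env[key] = parent_env[key]
    return env

parent = {
    "PATH_INFO": "/v1/AUTH_acct/container",
    "REQUEST_METHOD": "PUT",
    "swift.authorize": lambda req: None,   # callback set by auth middleware
    "keystone.identity": {"roles": ["ReadWrite"]},
}

naive = make_subrequest(parent, "/v1/AUTH_acct/container/obj1")
fixed = make_subrequest(parent, "/v1/AUTH_acct/container/obj1",
                        copy_keys=("swift.authorize", "keystone.identity"))

print("swift.authorize" in naive)   # False: auth context lost
print("swift.authorize" in fixed)   # True: subrequest carries parent auth
```

If the bulk middleware's subrequests behave like `naive` here, each per-file PUT would arrive at the auth layer without the identity needed to evaluate the container ACL, which would explain the per-file 403s.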
The proxy-server.conf is provided below. It has been sanitized a bit, but should represent what's in place.
[DEFAULT]
bind_port = 8080
bind_ip = xxx.xxx.xxx.xxx
user = swift
swift_dir = /etc/swift
workers = 16
[pipeline:main]
pipeline = catch_errors proxy_logging cache bulk auth swiftauth proxy_logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_
account_autocreate = false
set log_name = proxy-server
client_timeout = 90
[filter:
use = egg:swift#
[filter:catch_errors]
use = egg:swift#catch_errors
set log_name = catch_errors
[filter:cache]
use = egg:swift#memcache
set log_name = proxy-swift-
[filter:auth]
use = egg:mymodule#
#this contains some configuration for custom auth module. Omitted here
[filter:swiftauth]
use = egg:mymodule#
operator_roles = StorageServiceA
is_admin = false
reseller_admin_role = <reseller_role>
#this is simply a reuse of the keystone auth with some additional code to interact with
#auth module above
[filter:
use = egg:swift#
[filter:
use = egg:swift#
[filter:bulk]
use = egg:swift#bulk
set log_name = swift-bulk
Please see question for full context: https:/
Changed in swift:
milestone: none → 1.10.0-rc1
status: New → Fix Committed
Changed in swift:
importance: Undecided → Medium
Changed in swift:
status: Fix Committed → Fix Released
Changed in swift:
milestone: 1.10.0-rc1 → 1.10.0
Sorry, this is using Swift 1.8.0.