image-volume ambiguous configuration leads to Quota exceeded for resources: ['volumes']

Bug #2030755 reported by Aliaksandr Vasiuk
This bug affects 3 people
Affects: OpenStack Cinder Charm
Status: In Progress
Importance: Undecided
Assigned to: Aliaksandr Vasiuk
Milestone: (none)

Bug Description

Hello,

We have two OpenStack clusters with Cinder backed by Pure Storage arrays.
1st cloud:
Ubuntu Focal, OpenStack Ussuri, cinder charm revision 607 from ussuri/stable, cinder packages version 16.4.1-0ubuntu1, Juju "2.9.42".
2nd cloud:
Ubuntu Focal, OpenStack Ussuri, cinder charm revision 607 from ussuri/stable, cinder packages version 16.4.2-0ubuntu2.2, Juju "2.9.42".

The image-volume cache is enabled with "juju config cinder-volume image-volume-cache-enabled=true".
Every time an OpenStack volume is created from an image, I see the following in the Cinder logs:
```
Failed to create new image-volume cache entry. Error: Quota exceeded for resources: ['volumes']
```
I create volumes with:
```
openstack volume create --image d906b8bd-1f7f-4e46-b01b-b147e97be33f --size 6 --type pure-c-array-iscsi foo-bar-volume
# Or a VM with bootable volume
openstack server create foo-bar-vm --flavor m1.medium --image ubuntu-20.04 --network network --boot-from-volume 6
```

I checked that the cinder charm generates cinder.conf with:
```
cinder_internal_tenant_project_id = services
cinder_internal_tenant_user_id = cinder
```

However, the documentation [1] recommends using IDs for these options, not names.

For some reason, perhaps due to Keystone internals, we have two "services" projects on both clouds. One of the projects has a "cinder" user and the other has none. I also checked a number of other Focal Ussuri clouds, and even newly deployed Jammy Yoga clouds: they all have two "services" projects generated by Juju.
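The failure mode can be illustrated with a small sketch (plain Python with made-up project and user data, not the real Keystone API): when the configured value is the name "services" and two projects share that name, a name-based lookup is ambiguous and can select the project that has no "cinder" user.

```python
# Hypothetical Keystone-like data: two projects share the name "services",
# but only one of them contains the "cinder" user. IDs are invented.
projects = [
    {"id": "aaa111", "name": "services", "users": []},
    {"id": "bbb222", "name": "services", "users": ["cinder"]},
]

def find_by_name(name):
    """Name-based lookup: silently returns whichever duplicate comes first."""
    return next(p for p in projects if p["name"] == name)

def find_by_id(project_id):
    """ID-based lookup: unambiguous, since Keystone IDs are unique."""
    return next(p for p in projects if p["id"] == project_id)

# The name lookup can land on the project without a "cinder" user,
# which matches the quota failures described above.
picked = find_by_name("services")
print("cinder" in picked["users"])                 # → False: wrong project

# An explicit ID always resolves to the intended project.
print("cinder" in find_by_id("bbb222")["users"])   # → True
```

This is why switching the configuration from names to IDs removes the ambiguity entirely.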

I tried updating `cinder.conf` manually, explicitly setting the ID of the "cinder" user and the ID of the project that contains this user, and restarted the cinder-volume service. After that, the "Quota exceeded for resources" errors vanished and the cache started working on both clouds; volumes are now created in a split second.
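For reference, the manual workaround amounts to a cinder.conf fragment like the following. The angle-bracket values are placeholders, not real IDs; the actual values can be looked up with `openstack project list` and `openstack user list`.

```ini
[DEFAULT]
# Use explicit Keystone IDs instead of names. Placeholders only:
cinder_internal_tenant_project_id = <id-of-the-services-project-that-contains-the-cinder-user>
cinder_internal_tenant_user_id = <id-of-the-cinder-user>
```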

As it looks to me, the situation with two "services" projects might be quite difficult to resolve, so the preferable fix would be to specify the exact IDs for the internal tenant options.

[1] https://docs.openstack.org/cinder/latest/admin/image-volume-cache.html

summary: - image-volume ambiguous configuration
+ image-volume ambiguous configuration leads to Quota exceeded for
+ resources: ['volumes']
Revision history for this message
Aliaksandr Vasiuk (valexby) wrote :

Hi,

I looked into the charm code a bit yesterday. I think we can fix this by using `admin_tenant_id` in the cinder.conf template.
Basically replace:
```
cinder_internal_tenant_project_id = {{ admin_tenant_name }}
```
with
```
cinder_internal_tenant_project_id = {{ admin_tenant_id }}
```
I double-checked that we store both the name and the ID in the charm's relation with Keystone, so we should be able to use the ID when rendering the config.
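A minimal sketch of what the template change does (plain Python standing in for the charm's Jinja2 rendering; the context values are invented for illustration):

```python
import re

def render(template: str, context: dict) -> str:
    """Tiny stand-in for Jinja2 variable substitution (not the charm code)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: context[m.group(1)], template)

# Hypothetical relation data from Keystone: the name is ambiguous
# (two projects share it), the ID is unique.
context = {
    "admin_tenant_name": "services",
    "admin_tenant_id": "bbb222",
}

before = render("cinder_internal_tenant_project_id = {{ admin_tenant_name }}", context)
after = render("cinder_internal_tenant_project_id = {{ admin_tenant_id }}", context)

print(before)  # cinder_internal_tenant_project_id = services
print(after)   # cinder_internal_tenant_project_id = bbb222
```

With the ID in the rendered config, Cinder no longer has to resolve an ambiguous project name at runtime.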
I'm assigning the bug to myself, and hope to come up with a resolution soon.

Best Regards,
Alex.

Changed in charm-cinder:
assignee: nobody → Aliaksandr Vasiuk (valexby)
Changed in charm-cinder:
status: New → Confirmed
Revision history for this message
Aliaksandr Vasiuk (valexby) wrote :

Hi,

I was able to test the above change on the `jammy-antelope` bundle, and it works great: I reproduced the bug without the change and confirmed it is fixed with the change. I will propose the change request tomorrow.

Best Regards,
Alex.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-cinder (master)
Changed in charm-cinder:
status: Confirmed → In Progress