cinder feature volume_copy_bps_limit needs package libcgroup
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla-ansible | Triaged | Undecided | Unassigned |
Bug Description
kolla version: 9.0.1
Node OS: CentOS 7
kolla_base_distro: CentOS
openstack_release: train
kolla_install_type: source
To limit the bandwidth/disk pressure of volume migrations you can make use of cinder's rate limiting feature. It is described here: https:/
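For context, a minimal sketch of what enabling this looks like; with kolla-ansible the usual place would be a custom config overlay such as /etc/kolla/config/cinder.conf (the section placement and the ~100 MiB/s value are just examples):

    # /etc/kolla/config/cinder.conf -- merged into the container's
    # cinder.conf by kolla-ansible's custom config mechanism
    [DEFAULT]
    # Upper limit for volume copy bandwidth, in bytes per second;
    # 0 (the default) disables throttling.
    volume_copy_bps_limit = 104857600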
The documentation also says that you need a package installed whose name depends on the OS:
> This feature requires libcgroup to set up blkio cgroup for disk I/O bandwidth limit. The libcgroup is provided by the cgroup-bin package in Debian and Ubuntu, or by the libcgroup-tools package in Fedora, Red Hat Enterprise Linux, CentOS, openSUSE, and SUSE Linux Enterprise.
Is there a generic way to make kolla-ansible add packages to certain containers or is there a deploy/reconfigure hook that I could use as workaround to add such customizations?
When I activate this config option without having that package installed, volume scheduling breaks and the following stack trace appears in cinder-volume.log:
Traceback (most recent call last):
  File "/var/lib/
    service.start()
  File "/var/lib/
    service_
  File "/var/lib/
    self.
  File "/var/lib/
    self.
  File "/var/lib/
    cgroup_name)
  File "/var/lib/
    cinder.
  File "/var/lib/
    return self.channel.
  File "/var/lib/
    raise exc_type(
OSError: [Errno 2] No such file or directory
You might want to rebuild the relevant image: https://docs.openstack.org/kolla/train/admin/image-building.html
and then propose the change upstream.
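For illustration, a sketch of such a change using kolla's template override mechanism. Assumptions: the block name follows kolla's <image>_footer convention, so check the cinder-volume Dockerfile.j2 for the exact name, and the file path is arbitrary:

    {# template-overrides.j2 -- appends to the cinder-volume image #}
    {% extends parent_template %}

    {% block cinder_volume_footer %}
    # CentOS/RHEL name the package libcgroup-tools
    RUN yum -y install libcgroup-tools && yum clean all
    {% endblock %}

Then rebuild with something like: kolla-build --template-override template-overrides.j2 cinder-volume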
I believe this might need more handling on the Kolla Ansible side as well, to be able to manage cgroups from inside the container.
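One conceivable direction (an unverified sketch, not what kolla-ansible currently ships): exposing the host cgroup hierarchy to the cinder-volume container in its service definition, roughly:

    # Hypothetical excerpt of a kolla-ansible cinder-volume service
    # definition (ansible/roles/cinder/defaults/main.yml)
    cinder_services:
      cinder-volume:
        container_name: cinder_volume
        privileged: True
        volumes:
          - "/sys/fs/cgroup:/sys/fs/cgroup"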
What bothers me even more is that the bare cgroup tooling is considered deprecated, as cgroups should be managed via systemd on systemd-based systems. Otherwise cgroups behave weirdly due to the separate hierarchies. I guess it does not matter much, since the limit is absolute rather than proportional, but still...
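For comparison, on a systemd-managed host an absolute write limit can be expressed as a transient unit property without libcgroup at all; the device path, limit, and command below are purely illustrative (on cgroup v1 hosts like CentOS 7 the legacy BlockIO* properties apply, IOWriteBandwidthMax= being the cgroup v2 analogue):

    # Illustrative only: run a copy throttled via a transient systemd scope
    systemd-run --scope -p BlockIOWriteBandwidth="/dev/sda 100M" \
        dd if=/path/to/src of=/path/to/dst bs=1M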
Also, note that this feature seems limited with respect to the backend used. I would suspect that running it with e.g. Ceph would have no effect, as it only appears to throttle locally-controlled bandwidth.
Please let me know if I can help you further.