[feature] add ability to specify desired storage class (ceph)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | Triaged | Wishlist | Unassigned |
Bug Description
Currently, k8s-master creates two storage classes when related to ceph: ceph-ext4 and ceph-xfs.
This appears to make an assumption about how Ceph is deployed, and if these classes are removed via kubectl, they get recreated.
In our particular environment, we're using full Ceph disks (OSDs) that are not pre-formatted or mounted, so there is no xfs or ext4 in this environment. This can be a confusing scenario for the customer.
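For context, the auto-created classes look roughly like the following. This is a hedged sketch using the in-tree `kubernetes.io/rbd` provisioner; the monitor address, pool name, and secret names below are placeholders, not values taken from this deployment:

```yaml
# Sketch of one of the auto-created classes (ceph-xfs); ceph-ext4 would be
# identical apart from the name and fsType. All values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-xfs
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.1:6789           # placeholder monitor address
  pool: rbd                         # placeholder pool name
  adminId: admin
  adminSecretName: ceph-secret      # placeholder secret name
  adminSecretNamespace: kube-system
  fsType: xfs                       # the filesystem assumption lives here
```

The `fsType` parameter is where the charm bakes in its ext4/xfs assumption, which is what doesn't match a raw-disk (OSD-only) environment.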
I'm proposing that instead we have the ability to specify, via a config option, whether we're using a filesystem or a full disk (SSD|HDD).
Then, either through another config option or by inferring it from the type of storage specified, determine what the default storage class should be. For example, if I say I have SSD storage, then configure the pool/class as SSD and create ceph-ssd as a class.
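The naming logic being proposed could be as simple as the sketch below. This is hypothetical code, not anything that exists in the charm today; the option name `ceph-storage-type` and the helper are assumptions mirroring the existing `ceph-xfs`/`ceph-ext4` naming:

```python
# Hypothetical sketch: derive a Kubernetes StorageClass name from a
# proposed "ceph-storage-type" charm config value, mirroring how the
# charm currently names ceph-xfs and ceph-ext4.
VALID_TYPES = {"ssd", "hdd", "ext4", "xfs"}

def storage_class_for(storage_type: str) -> str:
    """Return the StorageClass name for a configured storage type,
    e.g. 'ssd' -> 'ceph-ssd'."""
    storage_type = storage_type.lower()
    if storage_type not in VALID_TYPES:
        raise ValueError("unknown ceph-storage-type: %s" % storage_type)
    return "ceph-" + storage_type
```

With this, setting the option to `ssd` would yield a `ceph-ssd` class instead of the hard-coded filesystem-based pair.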
Changed in charm-kubernetes-master:
importance: Undecided → Wishlist
status: New → Triaged
From IRC:
<jhillman> knobby: well the ability to have a little more control of what classes get made, like again, in this scenario we won't have xfs or ext4, but instead block ssd. so perhaps options like "ceph-storage-type={block,fs,whatever}" then "ceph-storage-pools={whatever,list,possibly}"
<jhillman> then classes made to match that
<knobby> jhillman: so ceph-storage-type would say "create pools backed with these types" and then ceph-storage-pools would be the names of the storage classes in k8s tied to each of the pools?
<jhillman> knobby: that's what i'm thinking
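The two-option scheme from the exchange above could be sketched as follows. Both option names (`ceph-storage-type`, `ceph-storage-pools`) and the comma-separated parsing are assumptions from the IRC proposal, not existing charm behavior:

```python
# Hypothetical sketch of the two proposed options:
#   ceph-storage-type  - how the pools are backed (e.g. "block", "fs")
#   ceph-storage-pools - comma-separated pool names, each of which would
#                        also become a StorageClass name in Kubernetes
def classes_from_config(storage_type: str, storage_pools: str) -> dict:
    """Map each configured pool name to the settings a charm might
    render for it, e.g. ('block', 'fast,slow') -> two classes."""
    pools = [p.strip() for p in storage_pools.split(",") if p.strip()]
    return {name: {"pool": name, "type": storage_type} for name in pools}
```

This matches knobby's reading: `ceph-storage-type` says how the pools are backed, and `ceph-storage-pools` names the StorageClasses tied to each pool.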