Allow to change mounter option from an existing PV #4691

Closed
FredNass opened this issue Jun 24, 2024 · 10 comments
Labels
enhancement (New feature or request) · wontfix (This will not be worked on)

Comments

@FredNass

Describe the feature you'd like to have

Hello, this issue is to discuss the feasibility of changing the mounter type of an existing PV.

Over time, due to issues with the Ceph MDS balancer in active/active mode and kernel hangups, we had to change the StorageClass's mounter type to fuse (instead of kernel). We now want to revert this change and use the kernel mounter again.

In the meantime, all PVs were provisioned with volumeAttributes.mounter set to fuse, and this property cannot be changed with a kubectl patch. As a result, those PVs are still mounted with ceph-fuse, even though the StorageClass was recreated with mounter: kernel.
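
To illustrate the mismatch (resource names below are made up for this sketch), the recreated StorageClass now requests the kernel mounter, while each existing PV still carries the immutable attribute it was provisioned with:

```yaml
# Recreated StorageClass: new PVs will be provisioned with the kernel mounter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs-sc                  # illustrative name
provisioner: cephfs.csi.ceph.com
parameters:
  mounter: kernel
  # clusterID, fsName and secret references omitted for brevity
---
# Existing PV, provisioned while the StorageClass still had mounter: fuse.
# spec.csi.volumeAttributes cannot be patched on a bound PV, so this volume
# keeps being mounted with ceph-fuse.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0123abcd               # hypothetical PV name
spec:
  storageClassName: cephfs-sc
  csi:
    driver: cephfs.csi.ceph.com
    volumeAttributes:
      mounter: fuse                # frozen at provisioning time
    # volumeHandle and remaining attributes omitted for brevity
```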

Is there a way to change the mounter type of an existing PV? By hacking the PV's etcd entry?

If not, can you provide a way to do so?

What is the value to the end user? (why is it a priority?)

Change the mounter type of an existing PV. This avoids having to move all data from one PV to another (which requires stopping the application) just to change the mounter type of a volume and regain better performance (with the kernel mounter).

How will we know we have a good solution? (acceptance criteria)

PVs previously provisioned with volumeAttributes.mounter: fuse and currently mounted with ceph-fuse would be remounted with the kernel client after the property is changed to volumeAttributes.mounter: kernel.

@FredNass
Author

FredNass commented Jun 24, 2024

There's been a similar discussion here: #1887

@Madhu-1
Collaborator

Madhu-1 commented Jun 24, 2024

@FredNass Makes sense, we can add it to the configmap and make it a dynamic one here.
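
Purely as a sketch of that idea (the "mounter" key below is hypothetical; no such setting exists in the ceph-csi config today), a dynamic cluster-wide override in the ceph-csi config ConfigMap could look like this:

```yaml
# Hypothetical sketch only: a cluster-wide mounter override in the ceph-csi
# config ConfigMap. The "mounter" key under "cephFS" is invented here for
# illustration; clusterID and monitors are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "<cluster-id>",
        "monitors": ["<mon1>", "<mon2>", "<mon3>"],
        "cephFS": {
          "mounter": "kernel"
        }
      }
    ]
```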

@ceph/ceph-csi-maintainers @ceph/ceph-csi-contributors thoughts?

@Madhu-1 Madhu-1 added the enhancement New feature or request label Jun 24, 2024
@FredNass
Author

@Madhu-1 Thank you for considering it. If I understand correctly, you have in mind a new ConfigMap setting that would enforce the mounter to use either FUSE or kernel mounting, regardless of the PV's volumeAttributes.mounter attribute value. Is that correct?

Any reason why the mounter was set in the StorageClass initially?

Was it intentional to allow administrators to decide that some PVs should be mounted with FUSE instead of kernel, depending on the type and structure of the data they contain? (Although I doubt many of us are using two different CephFS StorageClasses in Kubernetes clusters, one with mounter: kernel and another with mounter: fuse.)

@nixpanic
Member

I think this is a perfect case for #4662

@Madhu-1
Collaborator

Madhu-1 commented Jun 25, 2024

> @Madhu-1 Thank you for considering it. If I understand correctly, you have in mind a new ConfigMap setting that would enforce the mounter to use either FUSE or kernel mounting, regardless of the PV's volumeAttributes.mounter attribute value. Is that correct?

> Any reason why the mounter was set in the StorageClass initially?

We didn't have any requirement for it, so it was set in the SC.

> Was it intentional to allow administrators to decide that some PVs should be mounted with FUSE instead of kernel, depending on the type and structure of the data they contain? (Although I doubt many of us are using two different CephFS StorageClasses in Kubernetes clusters, one with mounter: kernel and another with mounter: fuse.)

I forgot that I recently opened #4662 for this use case, where the admin will be able to dynamically change the options for an existing PVC. But for this, we need an implementation in cephcsi to store the details in the omap or the image metadata.

@nixpanic @Rakshith-R what is your suggestion for storing the options, omap or image/subvolume metadata?

@nixpanic
Member

> @nixpanic @Rakshith-R what is your suggestion for storing the options, omap or image/subvolume metadata?

Image/subvolume metadata is my preference; that way the settings are kept together with the storage. Certain RBD features already use keys in the RBD image metadata (like I/O bandwidth throttling for rbd-nbd).
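
As a conceptual sketch of that approach (the key names below are invented for illustration, not an existing convention), the per-volume options could live as key/value metadata attached to the subvolume or RBD image, for the node plugin to read back at mount time:

```yaml
# Hypothetical metadata entries kept on the subvolume / RBD image itself;
# key names are invented for illustration only.
csi.ceph.com/mounter: "kernel"
csi.ceph.com/mount-options: "noatime"
```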

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the wontfix This will not be worked on label Jul 25, 2024
@FredNass
Author

Preventing closure during summer.

@github-actions github-actions bot removed the wontfix This will not be worked on label Jul 26, 2024
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the wontfix This will not be worked on label Aug 25, 2024

github-actions bot commented Sep 1, 2024

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@github-actions github-actions bot closed this as not planned Sep 1, 2024