How to set an annotation on a CSI PV? #368
I believe this is the same issue that Luis brought up at the last meeting and is being discussed in #369.
I feel it's not exactly the case I am talking about. In short, before CSI, the in-tree storage plugins had access to the PV object and could set annotations on it directly.
Even if #369 refers to PVCs and you are referring to PVs, from my point of view it is the same issue: a CO does not have the necessary plumbing to maintain feature parity. That doesn't seem relevant to the CSI spec, which doesn't know anything about PVCs or PVs. If you really need that information, you can always add that functionality to the external-provisioner sidecar or create your own sidecar for Kubernetes.
@humblec thinking from a use-case perspective, what would the SP do with the GID information in your case?
Let me elaborate the use case. Whenever the SP provisions a new volume, it also creates a "user/group" as the owner of that volume. The permissions of the volume (read/write/exec) belong to that particular user, and ownership of the volume is set to that newly created user/GID. The SP can do this today without help from the CO/CSI spec. But when the CO (Kubernetes) attaches this volume to a pod, access to the volume is not granted unless this GID is part of the supplemental groups of that pod.

At present, we (as you know) tackle this scenario in Kubernetes with an annotation on the PV called `pv.beta.kubernetes.io/gid`. From a CO/CSI point of view, this request or workflow doesn't even need to be generated from `CreateVolume()`; rather, it can be a POST action once the SP returns the `CreateVolumeResponse`.

I hear that the SP can tackle this by forking a sidecar, but considering this is a very generic use case, I am seeking input on solving it with a generic API.
When provisioning, how does the SP decide which user/group and permissions to give? For setting the supplemental group in the pod, can the user not do that already with `Pod.SecurityContext`?
It's completely a new (random) user created at volume provisioning time by the SP.

Unfortunately no. The reason being the GID is generated by the SP at provisioning time, so the user cannot know it in advance to put it in `Pod.SecurityContext`.
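For context, this is how a supplemental group is set statically today (a minimal sketch; the pod name, PVC name, and GID are placeholders). The catch is exactly what's described above: `supplementalGroups` must be known when the pod spec is written, which is impossible for a GID the SP generates at provisioning time:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                      # placeholder name
spec:
  securityContext:
    supplementalGroups: [5555]   # GID must be known up front
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim      # placeholder PVC
```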
How does the SP ensure that the "randomly" generated user/group falls within the allowed range set by the administrator in the PodSecurityPolicy (https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups)? Should the access control instead be specified by the admin via the CO and passed down to the SP during provisioning, which potentially implies some field in `CreateVolume`? Can access control of a volume change after provisioning?
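(For reference, the allowed range mentioned here is expressed in a PodSecurityPolicy roughly as follows; a sketch with placeholder values, and the remaining required rules left permissive for brevity:)

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: gid-range            # placeholder name
spec:
  supplementalGroups:
    rule: MustRunAs
    ranges:                  # pods may only use GIDs in this range
      - min: 1000
        max: 2000
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
```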
@msau42 apologies for missing the notification.

With in-tree plugins this is handled from the StorageClass, where the admin can specify the allowed GID range; the SP then allocates each volume a GID from that range.

The control knob in the StorageClass also works, as mentioned above.
Actually no. The reason being the storage backend has been set up with whatever was passed at volume creation time.
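To illustrate the in-tree control knob mentioned above: the GlusterFS provisioner, for example, accepts a GID range in the StorageClass parameters and allocates each volume a GID from that range (a sketch; the class name and `resturl` endpoint are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-gid                        # placeholder name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"  # placeholder REST endpoint
  gidMin: "2000"                             # volumes get a GID from
  gidMax: "4000"                             # this admin-defined range
```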
The dynamically provisioned PVs in Kubernetes (without CSI) make use of some PV annotations when a pod attaches a PVC. For example, an annotation like `volutil.VolumeGidAnnotationKey` can be used to set the GID of the volume; if this is specified, the GID will be part of the supplemental groups (SGID) array when the pod is spawned, and thus the pod user gets access to the volume (PV). This is an important security setting which storage providers make use of in the Kubernetes CO today. With CSI, I don't see a mechanism to set an annotation on a PV, and thus no solution to this problem. How can I set a dynamic GID in a PV annotation for a volume while using CSI?
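For illustration, the in-tree mechanism looks like this on the PV object; `pv.beta.kubernetes.io/gid` is the key behind `volutil.VolumeGidAnnotationKey`, and the rest of the spec below uses placeholder values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-gid                    # placeholder name
  annotations:
    pv.beta.kubernetes.io/gid: "2015"  # kubelet adds this GID to the
                                       # pod's supplemental groups
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:                           # placeholder volume source
    endpoints: glusterfs-cluster
    path: myvol
```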
Making the volume (PV) world-readable/writable is not the solution, and it's not going to work out.

Can you please let me know if there is a way to work around or solve this issue in CSI? If no solution exists (at least I don't see one), it's very difficult for storage vendors to support CSI-based PVs, and it's a serious issue!