How to set an annotation on a CSI PV? #368


Open

humblec opened this issue Jun 6, 2019 · 13 comments

Comments

@humblec
Contributor

humblec commented Jun 6, 2019

Dynamically provisioned PVs in Kubernetes (without CSI) make use of certain PV annotations when a pod attaches a PVC. For example, an annotation like volutil.VolumeGidAnnotationKey can be used to set the GID of the volume; if it is specified, that GID becomes part of the pod's supplemental groups (SGID array) when the pod is spawned, and thus the pod user gets access to the volume (PV). This is an important security setting which storage providers make use of in the Kubernetes CO today. With CSI, I don't see a mechanism to set an annotation on the PV, and thus no solution to this problem.

How can I set a dynamic GID in a PV annotation for a volume while using CSI?

Making the volume (PV) world readable/writable is not a solution and is not going to work out.
Can you please let me know if there is a way to work around or solve this issue in CSI? If no solution exists (at least I don't see one), it will be very difficult for storage vendors to support CSI-based PVs, and that is a serious issue!
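
For reference, the in-tree pattern looks roughly like this (a simplified sketch, not the actual plugin code; the annotation key is the value behind volutil.VolumeGidAnnotationKey):

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// volumeGidAnnotationKey mirrors volutil.VolumeGidAnnotationKey used by the
// in-tree dynamic provisioners.
const volumeGidAnnotationKey = "pv.beta.kubernetes.io/gid"

// newPVWithGid sketches how an in-tree provisioner records the GID it
// allocated for a volume on the PV it creates; the kubelet later reads this
// annotation and adds the GID to the pod's supplemental groups.
func newPVWithGid(name string, gid int) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{
			Name:        name,
			Annotations: map[string]string{volumeGidAnnotationKey: strconv.Itoa(gid)},
		},
		// Spec (capacity, access modes, volume source) omitted for brevity.
	}
}

func main() {
	pv := newPVWithGid("pvc-1234", 2001)
	fmt.Println(pv.Annotations[volumeGidAnnotationKey])
}
```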

@humblec
Contributor Author

humblec commented Jun 6, 2019

@jsafrane @saad-ali

@jdef
Member

jdef commented Jun 6, 2019

is this related to #30 #99 ?

@Akrog
Contributor

Akrog commented Jun 7, 2019

I believe this is the same issue that Luis brought up at the last meeting and is being discussed in #369.

@humblec
Contributor Author

humblec commented Jun 7, 2019

> is this related to #30 #99?

Let me go through those issues in detail. From one angle it looks similar, but it does not look like a complete match. Let me spend some time on the referenced issues before I confirm, @jdef.

@humblec
Contributor Author

humblec commented Jun 7, 2019

@msau42 @gnufied jfyi

@humblec
Contributor Author

humblec commented Jun 12, 2019

> I believe this is the same issue that Luis brought up at the last meeting and is being discussed in #369.

I feel it's not exactly the case I am talking about. In short, before CSI, the storage plugins had access to the PV they created, which was consumed by the CO, e.g. Kubernetes. But with CSI, I don't see that access to the PV for the SP driver. The use case I listed in the problem description is important when we consider feature parity between the previous drivers and CSI. The GID annotation is especially important because it controls access to the volume from the CO to the storage backend. It also comes in handy when orchestrators/operators/controllers look for such marks for further processing. We don't have to "open up" the entire PV spec for the SP; let that control stay with the CO translator/CSI in-tree plugin. But there should be a provision for the SP to pass these values and for the CO to put them in a specific field, for example an annotation. That should solve it. The closest solution I can think of here is volume context, but it looks insufficient for this use case.
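
To illustrate what I mean by volume context falling short, an SP can return a GID like this (a sketch using the CSI Go bindings; the "gid" key is just an example, not something the spec defines), but the CO only hands these values back on Node* calls as volume attributes, it does not surface them as PV annotations:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// createVolumeResponse sketches an SP returning the GID it allocated in the
// volume context. The CO stores these values as volume attributes for later
// node calls; it does not turn them into PV annotations, which is why volume
// context alone does not cover the supplemental-GID use case.
func createVolumeResponse(volumeID string, gid int64) *csi.CreateVolumeResponse {
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      volumeID,
			VolumeContext: map[string]string{"gid": fmt.Sprintf("%d", gid)},
		},
	}
}

func main() {
	resp := createVolumeResponse("vol-1", 2001)
	fmt.Println(resp.Volume.VolumeContext["gid"])
}
```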

@Akrog
Contributor

Akrog commented Jun 24, 2019

> > I believe this is the same issue that Luis brought up at the last meeting and is being discussed in #369.
>
> I feel it's not exactly the case I am talking about. In short, before CSI, the storage plugins had access to the PV they created, which was consumed by the CO, e.g. Kubernetes. But with CSI, I don't see that access to the PV for the SP driver. The use case I listed in the problem description is important when we consider feature parity between the previous drivers and CSI. The GID annotation is especially important because it controls access to the volume from the CO to the storage backend. It also comes in handy when orchestrators/operators/controllers look for such marks for further processing. We don't have to "open up" the entire PV spec for the SP; let that control stay with the CO translator/CSI in-tree plugin. But there should be a provision for the SP to pass these values and for the CO to put them in a specific field, for example an annotation. That should solve it. The closest solution I can think of here is volume context, but it looks insufficient for this use case.

Even if #369 refers to PVCs and you are referring to PVs, from my point of view it is the same issue: a CO does not have the necessary plumbing to maintain feature parity. That doesn't seem relevant to the CSI spec, which doesn't know anything about PVCs, PVs, or GID annotations, and it shouldn't know about those, as they are Kubernetes-specific and the CSI spec should be CO-agnostic.

If you really need that information, you can always add that functionality to the external-provisioner sidecar or create your own sidecar for Kubernetes.
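
For example, a small custom sidecar could watch the PVs your driver provisions and patch the annotation onto them; a rough sketch with client-go (the GID lookup from the storage backend is left out, and the function names are made up for illustration):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// annotatePVWithGid patches an already-provisioned PV with the GID annotation.
// A custom sidecar could call this after CreateVolume succeeds, using a GID it
// looked up from the storage backend (lookup not shown here).
func annotatePVWithGid(ctx context.Context, client kubernetes.Interface, pvName string, gid int) error {
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{"pv.beta.kubernetes.io/gid": fmt.Sprintf("%d", gid)},
		},
	})
	if err != nil {
		return err
	}
	_, err = client.CoreV1().PersistentVolumes().Patch(
		ctx, pvName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := annotatePVWithGid(context.Background(), client, "pvc-1234", 2001); err != nil {
		panic(err)
	}
}
```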

@msau42

msau42 commented Jun 24, 2019

@humblec Thinking from a use-case perspective, what would the SP do in your case with the GID information?

@humblec
Contributor Author

humblec commented Jun 26, 2019

> @humblec Thinking from a use-case perspective, what would the SP do in your case with the GID information?

Let me elaborate on the use case. Whenever the SP provisions a new volume, it also creates a "user/group" as the owner of that volume. The permissions on the volume (read/write/exec) belong to that particular user, and ownership of the volume is set to that newly created user/GID. The SP can do this today without help from the CO/CSI spec. But when the CO (Kubernetes) attaches this volume to a pod, unless this GID is part of the pod's supplemental groups, access to the volume is not granted. At present, we (as you know) tackle this scenario in Kubernetes with an annotation on the PV called volutil.VolumeGidAnnotationKey. While carving the PV spec, the SP driver (for example: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/glusterfs/glusterfs.go#L741) puts that annotation on the PV for the CO/Kubernetes to consume when the pod is spawned, and thus access is granted via the supplemental GID. This is not specific to GlusterFS or CephFS; any FS which needs to restrict access to a user per volume makes use of it. In CSI, I don't see a way to control this from the SP driver side, and the mechanism is not available for the SP to instruct the CO/Kubernetes to do it on its behalf.
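
The consumption side boils down to something like this (a simplified illustration, not the actual volume_manager.go code):

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// extraSupplementalGid is a simplified illustration of the consumption side:
// if the PV carries the GID annotation and the pod does not already request
// that GID itself, the kubelet adds it to the pod's supplemental groups.
func extraSupplementalGid(pv *corev1.PersistentVolume, pod *corev1.Pod) (int64, bool) {
	raw, ok := pv.Annotations["pv.beta.kubernetes.io/gid"]
	if !ok {
		return 0, false
	}
	gid, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		return 0, false
	}
	if pod.Spec.SecurityContext != nil {
		for _, g := range pod.Spec.SecurityContext.SupplementalGroups {
			if g == gid {
				return 0, false // the pod author already asked for it
			}
		}
	}
	return gid, true
}

func main() {
	pv := &corev1.PersistentVolume{}
	pv.Annotations = map[string]string{"pv.beta.kubernetes.io/gid": "2001"}
	gid, ok := extraSupplementalGid(pv, &corev1.Pod{})
	fmt.Println(gid, ok)
}
```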

From the CO/CSI point of view, this request or workflow doesn't even need to be generated from CreateVolume(); rather, it can be a post action once the SP returns the CreateVolumeResponse.

I hear that the SP can tackle this by forking the sidecar, but considering this is a very generic use case, I am seeking input on solving it in a generic API.

@saad-ali @bswartz @lpabon @childsb JFYI

@msau42

msau42 commented Jun 26, 2019

When provisioning, how does the SP decide which user/group and permissions to give? For setting a supplemental group in the pod, can the user not already do that with Pod.SecurityContext?

@humblec
Contributor Author

humblec commented Jun 27, 2019

> When provisioning, how does the SP decide which user/group and permissions to give?

It is completely a new (random) user created at volume provisioning time by the SP.

> For setting a supplemental group in the pod, can the user not already do that with Pod.SecurityContext?

Unfortunately, no. The reason is that PVCs are available in a namespace for pods to consume arbitrarily; the pod authors don't have any control over, or knowledge of, the GIDs attached/permitted on a volume. That's one of the reasons for the introduction of that specific GID annotation and the indirect injection mechanism here (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go#L282).
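
For comparison, this is what the pod author would have to write if they did know the GID up front (a sketch; the point is that with a per-volume random GID they can't):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podWithSupplementalGid shows what a pod author would have to write to grant
// access via Pod.SecurityContext: the exact GID must be known up front, which
// is not possible when the SP picks a random GID per volume.
func podWithSupplementalGid(gid int64) *corev1.Pod {
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				SupplementalGroups: []int64{gid},
			},
			// Containers and the PVC volume reference omitted for brevity.
		},
	}
}

func main() {
	pod := podWithSupplementalGid(2001)
	fmt.Println(pod.Spec.SecurityContext.SupplementalGroups)
}
```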

@msau42

msau42 commented Jun 27, 2019

How does the SP ensure that the "randomly" generated user/group falls within the allowed range set by the administrator in the PodSecurityPolicy? https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups

Should the access control instead be specified by the admin via the CO and passed down to the SP during provisioning, which potentially implies some field in CreateVolume? Can access control of a volume change after provisioning?

@humblec
Contributor Author

humblec commented Aug 8, 2019

@msau42 apologies for missing the notification.

> How does the SP ensure that the "randomly" generated user/group falls within the allowed range set by the administrator in the PodSecurityPolicy? https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups

With in-tree drivers this is handled via the SC, where the admin can specify the GID range, i.e. a minimum and a maximum GID. The GIDs are created within this range.
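
For example, the in-tree GlusterFS provisioner exposes this knob as StorageClass parameters; a sketch of what the admin sets (parameter names as I recall them, other provisioners may spell them differently):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gidRangeStorageClass sketches the in-tree style control knob: the admin
// bounds the GIDs the provisioner may allocate via StorageClass parameters.
// Parameter names follow the in-tree GlusterFS provisioner; other
// provisioners may spell them differently.
func gidRangeStorageClass(min, max int) *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "gluster-dynamic"},
		Provisioner: "kubernetes.io/glusterfs",
		Parameters: map[string]string{
			"gidMin": fmt.Sprintf("%d", min),
			"gidMax": fmt.Sprintf("%d", max),
		},
	}
}

func main() {
	sc := gidRangeStorageClass(2000, 2999)
	fmt.Println(sc.Parameters)
}
```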

> Should the access control instead be specified by the admin via the CO and passed down to the SP during provisioning, which potentially implies some field in CreateVolume?

A control knob in the SC would also work for that, as mentioned above.

> Can access control of a volume change after provisioning?

Actually, no. The reason is that the storage backend is set up with whatever was passed at the time of volume creation.
