NodePublish – Secure access to shared mounted volumes #17
Good point, I'm looking at this aspect, too.
That brings up an interesting point about the current workflow. Currently, the spec assumes that the CO will publish the volume to the node once. If there are multiple containers on the node that want to access the volume, the CO is responsible for bind-mounting the volume into each container. What if those containers have different identities?
This is a very good catch. We handle this use case in Cloud Foundry, but CF has a slightly different flow today. Our version of the "controller" plugin is filled by an implementation of the CNCF Open Service Broker API. That API includes a "Bind" endpoint that allows the plugin to do some work when the volume is associated with a container workload (or "application" in CF parlance). The service broker plugin has the opportunity to create credentials in that binding response that are eventually handed back to the node plugin at mount time. Maybe we need to consider adding a similar endpoint to the master API? IMO it will be difficult for the CO to know how to concoct the credentials data that is needed by the node plugin. In CF we have a few different service brokers that all use the same volume driver (or node plugin) to mount NFS shares, but have totally different schemes to assign UIDs to a running application.
I think it makes sense to always call `NodePublish`/`NodeUnpublish` once per container. If we later realize that it's too much, then we can just switch to making only one publish/unpublish call, and it should make no difference to the plugin itself, but going the other way (adding multiple publish/unpublish calls later) is more difficult.
One option would be to allow a CO to call `NodePublish` multiple times for the same volume on the same node, each time with a different `target_path`. To solve the user/group issue, we still need to introduce parameters that a CO can set differently for each call. @cpuguy83, to your comment: unlike DVDI, the CSI spec asks the CO to specify the `target_path`.
IMHO, `NodePublish` is required for every container that will use the volume; a different `target_path` and user credentials should be able to be provided for each container independently.
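For illustration, a per-container publish could look roughly like this at the message level (a sketch only; the `credentials` field is hypothetical and not current spec text):

```proto
syntax = "proto3";

package csi.sketch;

// Hypothetical shape of a per-container publish request. volume_id and
// target_path mirror the fields discussed above; credentials is an
// illustrative addition, not part of the spec.
message NodePublishVolumeRequest {
  // The volume to publish, as in the current spec.
  string volume_id = 1;

  // A distinct mount point per container; the CO would call
  // NodePublish once per container with a different value here.
  string target_path = 2;

  // Per-container identity/credentials, so the plugin can publish the
  // volume with different ownership or access rights per container.
  map<string, string> credentials = 3;
}
```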
@oritnm The reason we don't want `NodePublish` to be called multiple times for the same volume on the same node is that the call must be idempotent, and repeated per-container calls would be hard to distinguish from CO retries.
@jieyu Understood, but since the `NodePublish` request needs to be idempotent and the CO can choose to call it again in case it didn't get a response, the Storage Plugin needs to handle this situation anyway (even without a specific access mode for it).
@jieyu I understand. Also: shall we file another issue to point out this specific shortcoming and decouple its discussion? This issue is more about how to implement secure access.
@oritnm That's a good point. We were thinking that the idempotency of `NodePublish` exists so that a CO can safely retry a call for which it didn't get a response, not to support multiple publishes of the same volume on the same node. If we want to go with the per-container publish route, I think we need to figure out exactly what per-container information the CO will pass to the Plugin, how that can be generalized so that it's meaningful to all COs, and how that information can be validated by the CO.
Thank you @oritnm. Will follow up with the working group on this ASAP.
FYI: I squashed the range of commits in PR #30 into a single concise commit; see there for details.
There are a few different levels of authentication/authorization.
Reproducing my comment on PR 83 here: I'm not clear on how the CO will know what credentials to pass to a given plugin. Some plugins will require a username and password, others a token, and so on. I'd prefer a set of official strategies, such as basic auth and bearer tokens. That way the CO only needs to support a small set of authentication strategies that are clearly specified by this spec. This lets a CO integrate with new plugins without changing code or leaking SP-specific details through the CSI plugin or around it (through SP-specific documentation around authentication). One of the best things about CSI is that the CO doesn't need to write special code for every SP. Having to read the SP documentation to figure out what keys are expected and how to format the values seems counterproductive.
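As a rough illustration of what such official strategies could look like (a sketch only; these message names are hypothetical and not proposed spec text):

```proto
syntax = "proto3";

package csi.sketch;

// Hypothetical canonical credential strategies. The spec would define
// a closed set so the CO can implement each flow once and reuse it
// for every conforming plugin.
message BasicAuth {
  string username = 1;
  string password = 2;
}

message BearerToken {
  string token = 1;
}

message Credentials {
  // A plugin declares which strategy it needs; the CO populates the
  // matching branch without any SP-specific code.
  oneof strategy {
    BasicAuth basic_auth = 1;
    BearerToken bearer_token = 2;
  }
}
```

With a closed set like this, the CO builds each authentication flow once and selects the right one per plugin, rather than reading per-SP documentation to learn key names and value formats.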
I think we can use a `google.protobuf.Any` for the credentials, and let the CO discover from the SP which concrete message type to pack into it.
@cpuguy83 Interesting concept, I've never seen that before. That may be useful, although (if I take your meaning) it would appear to require a working internet connection from the cluster to the SP, which, given that the actual data resides at the SP, sounds quite reasonable to me. But still, how will the CO know how to populate whichever fields are declared in the response?

I'm not clear how authentication can be opaque to the CO. It seems that fundamentally the CO needs to get details about the user to the SP through the CSI plugin? Depending on the SP, the exact details will be different.

Imagine a CO cluster configured with two SPs (SP-1 and SP-2). Both SPs require the end-user to be authenticated, both to determine whether the volume may be published RW vs. RO vs. not at all, and to audit that user's actions performed on the data once published.

SP-1 requires that the end-user authenticate using a username+password combination. This information is likely stored as a secret by the CO and should accompany any Create/Destroy/*Publish/etc. RPCs to the CSI plugin for SP-1. Perhaps there's some mapping between secret name and SP credential which lets the CO determine which secret is the SP-1 username and which is the SP-1 password?

SP-2 is integrated with the specific CO cluster through some Single Sign-On component and expects the CO to pass it a valid authentication token that represents the end-user.

It seems to me that the CO will need to perform quite a lot of authentication-specific logic from when the user launches the application until some volume is published to a Node. If the spec provides canonical messages defining credential types for a few common authentication strategies, the CO can build generic authentication flows regardless of which CSI plugin / SP the `CreateVolume` call gets scheduled to. At least, I hope so.
I think only the user interface needs to know the specifics of the auth protocol. With the `Any` concept, this can be determined by passing the protobuf type URL (which can be found by querying the SP). This is of course all really easy to say; I've not attempted to implement something like this before.
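A minimal sketch of that idea, assuming hypothetical message names: the SP advertises the type URL of the credential message it expects, and the request then carries the packed message as a `google.protobuf.Any`:

```proto
syntax = "proto3";

package csi.sketch;

import "google/protobuf/any.proto";

// Hypothetical: the plugin advertises which credential message it
// expects, identified by its protobuf type URL.
message CredentialTypeResponse {
  // e.g. "type.googleapis.com/sp.v1.BasicAuth"; the CO (or its UI
  // layer) constructs and packs the matching message.
  string credential_type_url = 1;
}

// The request carries the packed credentials as an Any, opaque to the
// CO core; only the UI layer and the plugin interpret the contents.
message PublishRequestSketch {
  string volume_id = 1;
  google.protobuf.Any credentials = 2;
}
```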
I like the direction this is going. It would appear that this way the user could specify an SP for a given application (i.e., specify the `Any` type URL) and pass a list of key/value pairs appropriate for that SP, where the keys would be defined by the SP. That said, I would rather not force the user to specify the SP for a given application; I hope to present the user with abstract 'storage profiles' instead.
Perhaps users could be required to enter their SP-specific credentials once (as secrets) for every SP, instead of once per action. These could be opaque sets of k/v pairs as specified by the SP's authentication documentation for static credentials, or the user could select "Share Auth token" for an SP that requires authn via SSO. It sounds like the CO could build such flows given PR 83.
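For illustration, such a per-SP credential record could look roughly like this (hypothetical names; only the idea of opaque k/v pairs entered once per SP comes from the comment above):

```proto
syntax = "proto3";

package csi.sketch;

// Hypothetical CO-side record: credentials entered once per SP and
// stored as secrets, then attached to every CSI call for that SP.
message StoredSPCredentials {
  // Which SP these credentials belong to, e.g. "SP-1".
  string sp_name = 1;

  // Opaque key/value pairs whose keys come from the SP's own
  // authentication documentation (static credentials), or a single
  // shared SSO token entry for token-based SPs.
  map<string, string> key_values = 2;
}
```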
@gpaul Let's carry on the conversation in #83 (comment), because it is directly related to that PR. I'll reproduce your comments there and add my response.
I think this issue now has at least two topics that I would argue are orthogonal:

1. secure, per-user access to an already-mounted, shared file system, and
2. how a volume is made available on the host and mapped into containers in the first place.

If we solve 2), we are able to make a file system available on the host and establish a volume mapping, which is when 1) applies. Originally this issue was only about 1), and I still hope that we can conclude on this topic soon, as I don't really see any design alternatives, and a solution is mandatory for doing file system access control (as presumably any production setup would want). Ultimately CSI is about establishing file system namespace mappings, and on at least both Linux and Windows, file systems are accessed by users and potentially groups, so I'd claim we don't introduce any OS dependencies. We have updated our proposal and submitted it as #99. Looking forward to feedback and comments.
We see a couple of challenges with the current model.
To provide authorized access to a shared mounted volume, IO requests need to be accounted on behalf of a user known to the storage provider, allowing authorization schemes such as POSIX permissions (and others) to take place.
The user/group identity should be added, along with secrets, to the node publish request. (Using the volume metadata is insufficient, since it would require all containers to use the same user identity.)
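Concretely, this amounts to a node publish request shaped roughly like the following (a sketch with hypothetical field names, not the actual proposal text):

```proto
syntax = "proto3";

package csi.sketch;

// Hypothetical extension of the node publish request: the identity
// that IO should be accounted against travels with the secrets, and
// can differ per container on the same node.
message NodePublishWithIdentity {
  string volume_id = 1;
  string target_path = 2;

  // User/group known to the storage provider, enabling POSIX-style
  // authorization of file system access.
  string user = 3;
  repeated string groups = 4;

  // Secrets proving the identity (for example a token or keytab).
  map<string, string> secrets = 5;
}
```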
See https://www.quobyte.com/blog/2017/03/17/the-state-of-secure-storage-access-in-container-infrastructures/
and https://github.com/sigma/cifs_k8s_plugin for background.