
Adds FilesystemUser Object for NodePublishVolumeRequests #99


Closed
wants to merge 1 commit into from

Conversation

casusbelli

Now that per-node volume publication has been added, we can
introduce the user mapping for filesystems discussed earlier
in PR #30.

This adds the Message type FilesystemUser. If a volume in a
NodePublishVolumeRequest has volume_capability.access_type set to
MountVolume, the filesystem user can optionally be set to a fixed
FilesystemUser object. This is done in order to enforce that all file
access requests through this mount instance are mapped to the user and
group information given here.
As this mapping is optional, if the user is not set in this volume
publication, the user and group values are taken as-is from the
incoming filesystem access requests created in the container.
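For illustration, the shape of the change is roughly the following. This is only a sketch: the field names and numbers are my reading of the description above, and the actual commit also appears to carry additional fields (e.g. a namespace field, asked about further down, and the mapping_credentials field discussed in review).

// Identity that all filesystem access through the published mount is mapped to.
message FilesystemUser {
  // User the filesystem access is mapped to. This field is REQUIRED.
  string user = 1;
  // Group the filesystem access is mapped to. This field is OPTIONAL.
  string group = 2;
}

message NodePublishVolumeRequest {
  // ... existing fields unchanged ...

  // Fixed filesystem identity for this volume publication.
  // Only evaluated if volume_capability.access_type is MountVolume;
  // if unset, user/group values are taken as-is from the access
  // requests created in the container. This field is OPTIONAL.
  FilesystemUser filesystem_user = 6;
}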

@casusbelli
Author

Mmh, I have an issue with the golang build and the corresponding checks, but please feel free to comment & review.

@jdef
Member

jdef commented Sep 19, 2017

I have an issue with the golang build and the corresponding checks...

If you use the same version of golang and protoc that the CI is using, and you're rebuilding the bindings upon making changes to spec.md, then it should all work out fine.

spec.md Outdated

// Credentials to authenticate this mapping request.
// This field is OPTIONAL.
Credentials mapping_credentials = 4;
Member

I'm struggling to understand the need for this field. Are these the credentials of the user? There's already a credentials field for the RPC itself; why do we need even more here? Furthermore, a FilesystemUser object isn't a
"mapping request", as I understand it. To me, FilesystemUser is an identity declaration.

Can you provide a short workflow/illustration that explains how this new credentials field would be used by a plugin?

Author

Adding this was a request from the last CSI sync conf call. People requested this to be added in order to authenticate the mapping request; personally, I also think this could be covered by the existing credentials map, but I wanted to follow up on the request.
The FilesystemUser object, if set, defines the identity that all file system access requests through this mount are mapped to. This prevents e.g. using the user identity "root:root" inside the container to access files on the mounted file system as user root. The ID "root:root" is mapped to the given FilesystemUser in the backend during rwx access.

The idea of the new credentials field is to provide credentials for the authentication of the mapping setup with the backend. Currently I'm not aware of a CO / backend supporting this specific authentication.

@chakri-nelluri

chakri-nelluri commented Sep 20, 2017

This is better handled by the identity management system in the CO.
We can use credentials for attach/mount calls, but these credentials will be totally different from the effective user & group ID / FS user & group ID the container will run as.
Drivers have very little knowledge to enforce these user/group IDs (or RBAC) at this layer.

@casusbelli
Author

@chakri-nelluri I'm happy to remove the additional credentials field; as written in the code comment, I added it because it was requested. I don't really think it is required here, as the request itself already supports the Credentials map, which can contain authorization credentials if the mapping has to be authorized separately.

The user/group ID for the mapping, however, is required as described above. As arbitrary user/group identities can be set inside the container, this FilesystemUser object allows CO admins to pin a specific container's access to a filesystem to a fixed, defined identity.
This FilesystemUser identity is not related to the user/group that the container is run as. It may even be unknown in the CO's identity management. In such a case it would, e.g., be provided in the container mount configuration by the CO admin via configuration management. It is a filesystem-related identity and only pins the identity of the filesystem access for the published mount.

For example, three containers on the same host may mount the same filesystem with:

  1. the first container having unmapped access (the uid/group of the process inside the container is used),
  2. the second container using a mapping that maps all filesystem access from container two to service_one/group_a,
  3. the third container using a mapping that maps all filesystem access from container three to service_two/group_a.

The backend driver ensures the configured mappings for the mounts in containers two and three are used instead of handing through the user/group identities accessing the filesystem inside the containers. The latter is what happens in container one, as it uses a mount without a pinning user mapping.
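In terms of the proposed field, the three publications would roughly look like this (protobuf text format; field names are illustrative and all other NodePublishVolumeRequest fields are omitted):

# Container one: no FilesystemUser set; access passes through unmapped.
# filesystem_user not present

# Container two: all access through this mount is pinned to service_one/group_a.
filesystem_user { user: "service_one" group: "group_a" }

# Container three: all access through this mount is pinned to service_two/group_a.
filesystem_user { user: "service_two" group: "group_a" }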

More detailed motivation for this can be read at https://www.quobyte.com/blog/2017/03/17/the-state-of-secure-storage-access-in-container-infrastructures/ (I think it was already referenced in these discussions, apologies if it has already been read).

@casusbelli
Author

I removed the additional credentials field as proposed by @jdef.

@casusbelli
Author

Are there any more open issues with this PR (as there have been no further comments)?
I had this on the agenda for the last CSI Community Sync, but we didn't get to that topic, so please simply reply in here.

@julian-hj
Contributor

In Cloud Foundry, user information will flow to the CSI plugins via the Open Service Broker API X-Broker-API-Originating-Identity header. The specific format of the CO identity for Cloud Foundry (and also Kubernetes) for the service broker API is detailed here.

So, it appears that for CF, we can meet the minimal requirements of this PR without a lot of effort. We would not have group information, however.
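For reference, my reading of the linked format: the header value is the platform name followed by base64-encoded JSON describing the originating user, roughly like this (values are placeholders, not real identities):

# Cloud Foundry: only a user_id is carried, hence no group information.
X-Broker-API-Originating-Identity: cloudfoundry <base64 of {"user_id": "<CF user guid>"}>

# Kubernetes: username, uid, and groups are carried.
X-Broker-API-Originating-Identity: kubernetes <base64 of {"username": "...", "uid": "...", "groups": ["..."]}>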

@oritnm

oritnm commented Nov 2, 2017

@julian-hj Can you provide the username (deduced from the X-Broker-API-Originating-Identity header)? From the linked page it is unclear (to me) whether you can provide information which can then be used by the SP to identify the user for IO operations.

@julian-hj
Contributor

@oritnm I infer from the link that any service request that was initiated by an end user will have the username included as user-id in that header, so, yes.

@casusbelli
Author

Note: This is waiting for the action item from 2017-11-01 [1] to see if COs can agree on a simple common FilesystemUser spec.

[1] https://docs.google.com/document/d/1-oiNg5V_GtS_JBAEViVBhZ3BYVFlbSz70hreyaD7c5Y/edit#

@casusbelli
Author

CO discussion resulted in postponing this until after the 0.1 spec release.

@casusbelli
Author

@jieyu Hi! Have there been any updates from the CO group discussions regarding a common filesystem user field? Perhaps we can discuss this in tomorrow's CSI Community Sync?

@jdef jdef added this to the v1.0 milestone Jan 24, 2018
This adds the Message type FilesystemUser. If a volume in a
NodePublishVolumeRequest has volume_capability.access_type set to
MountVolume, the filesystem user can optionally be set to a fixed
FilesystemUser object. This is done in order to enforce that all file
access requests through this mount instance are mapped to the given
user and group information.
As this mapping is optional, if the user is not set in this volume
publication, the user and group values are taken as-is from the
incoming filesystem access requests created in the container.
If block storage is used instead of a file system, this field is
ignored.
@jdef
Member

jdef commented Oct 4, 2018

This is pretty stale. We're talking about releasing CSI v1.0 soon and I'm not sure this will be completed by then. Re-labeling as post-1.0.

@jdef jdef added the post-v1 label Oct 4, 2018
@jdef jdef removed this from the v1.0 milestone Oct 4, 2018
@jdef jdef added the lifecycle/stale PR has been inactive for 90+ days. label Oct 4, 2018
@saad-ali saad-ali added lifecycle/rotten Stale PR has been inactive for 30+ days. and removed lifecycle/stale PR has been inactive for 90+ days. labels Nov 7, 2018
@saad-ali
Member

saad-ali commented Nov 7, 2018

@shay-berman

Any progress here @casusbelli?
Sounds like an important API improvement for securing the mount point of the PV on the worker node.

BTW, why is the "namespace" needed in the struct?

Without this fix in the API, is there another standard way to identify the mount options of the filesystem?

@saad-ali
Member

saad-ali commented Mar 6, 2019
