Hello, attached are some comments/thoughts on the crypto aspect. I think the overall goal should be to stay secure and flexible, and to avoid overly opinionated decisions, while still being decisive enough to avoid implementation ambiguity.
I think the most sensible way forward on this part of the standard is to make it optional until it's been reviewed and is broadly supported. It might make sense here to use digests as a stepping stone to digital signatures.
> ## 3 Cryptographic signatures
>
> NOTE: I know this is sounding crazy, but it just might work! The main problem is that this is very slow. Every file in the container's root filesystem must be read. It is, however, very flexible and quite portable. Some things to decide here:
>
> * How carefully do we specify the digest file? (bad, but accurate name)
Well, that's easy, I think: just make the container digest manifest recommended but optional, like the .asc digital signature in Rocket, for the time being. That way it doesn't break anything existing.
For example:

```
container.img    (container file / archive format for distribution)
container.sig    (digital signature)
container.sha256 (SHA digest)
```
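As a sketch of how those sidecar files could be produced and checked (hypothetical filenames; the gpg step needs a private key, so it is shown commented out):

```shell
# Create a stand-in "container image" so the commands below are runnable.
echo "fake container payload" > container.img

# (Digest) Produce and verify a SHA-256 sum of the packed image.
sha256sum container.img > container.sha256
sha256sum --check container.sha256

# (Signature) A detached, ASCII-armored OpenPGP signature, like rkt's
# .asc; requires a signing key, so shown for illustration only:
#   gpg --detach-sign --armor --output container.sig container.img
#   gpg --verify container.sig container.img
```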
There are a few options overall when speaking of signatures, hashes, etc., I'd think:

(1) Do nothing.
(2) Digest: a SHA-2/SHA-3 sum of the container file (only provides very basic integrity, but a low barrier to implementation).
(3) HMAC: not really useful in this context, so this is probably out.
(4) Digital signature: GPG-style sigs like in Ubuntu PPAs, rkt, etc. (integrity + authentication + non-repudiation).
I'd say at minimum you need a SHA-2/SHA-3 digest of the container file, and it's probably very important to get a digital signature into the standard sooner rather than later. But a digest could be a step along the way to signatures. Either way, the digital signature should support open standards, which brings me to...
Digital Signature Formats:

You've got two or three options here, I'd think, for the sigs. Basically you can put them in CMS format (aka PKCS#7, RFC 5652) or in OpenPGP format (RFC 4880). Both formats are equally annoying. The CMS format is ASN.1, while OpenPGP uses ASCII armor; the former is used in S/MIME, the latter in OpenPGP.

I noticed TUF got mentioned as well, so I included that. Here's the link to the TUF spec: https://github.com/theupdateframework/tuf/blob/develop/docs/tuf-spec.txt

Regarding TUF: I like how it uses JSON and I love that it's in Python. However, if you look at the TUF standard you can see that (A) the project doesn't seem to be quite production-ready yet, though it is promising, and (B) it doesn't have the countless implementations that the other two formats have.
But I would definitely say pick one digital signature format rather than allowing implementations to choose among all three; that'd be a mistake. For digital signatures especially, there's so much that can go wrong already that it's best to keep things concise and clear. Honestly, I'd vote for OpenPGP here.
Either way, you're going to have to use an external lib or tool. Probably better just to make it SHA-256 or SHA-512 and/or OpenPGP.
> To ensure that containers can be reliably transferred between implementations and machines, we define a flexible hashing and signature system that can be used to verify the unpacked content. The generation of signatures is separated into three different steps, known as "digest", "sign" and "verify".
OK: if you verify (via digital signature and/or authenticated encryption) the packed content, you also verify the unpacked content. They are equivalent as far as I'm aware, at least for a given instant in time. Meaning: if an attacker flips any one bit in the compressed container image, the uncompressed container will fail validation just the same as if the attacker flips any one bit in the uncompressed filesystem. So there only really needs to be one validation here, I'd think, at the container level, though I could certainly be missing something... unless we are talking about protection of data at rest, or of files in a container on my laptop from day to day.
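The packed-vs-unpacked equivalence is easy to demonstrate; here is a sketch using a tarball as a stand-in for the container image format:

```shell
# Build a tiny "filesystem" and pack it, recording the packed digest.
mkdir -p rootfs
echo "original" > rootfs/app.conf
tar czf image.tgz rootfs
sha256sum image.tgz > image.sha256
sha256sum --check image.sha256

# Tamper with one file inside the filesystem and repack: the single
# container-level digest now fails, covering the unpacked content too.
echo "tampered" > rootfs/app.conf
tar czf image.tgz rootfs
sha256sum --check image.sha256 || echo "digest mismatch detected"
```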
Okay, first, let's separate the two notions mentioned above: digest, sign, verify. I like how they are separated into steps; I actually support that. So first we separate digests/hashes from digital signatures. Then we can separate them again based on whether they apply at the container level (a single container file) or at the internal filesystem level (within the container's uncompressed filesystem).
I'd suggest going for the low-hanging fruit and defining these with regard to the container first, given that a bit flip in the container and one in the filesystem are one and the same. That's the really important part.

Adding filesystem-level hashes is probably a good idea, but might be a bit much at this point. I can already (maybe?) see issues arising with OverlayFS: with multiple layers, there's room for interpretation about when to calculate the changes, how to perform the merge, how to handle whiteout files, etc., with regard to sigs and digests.

On top of that, as much as I like the digest-of-all-files idea personally (hashes/digests and/or digital sigs of every file in the container filesystem), it seems like almost a re-implementation of something like Tripwire.
> The purpose of the "digest" step is to create a stable summary of the content, invariant to irrelevant changes yet strong enough to avoid tampering. The algorithm for the digest is defined by an executable file, named "digest", directly in the container directory. If such a file is present, it can be run with the container path as the first argument:
Well, running a digest/hash and/or digital signature check on the compressed container image is the critical juncture. Meaning: if I get owned, it's going to be when I download and run a container image from a host somewhere in the Russian Business Network autonomous system, heh. Not when I start the same container for the second time in a day. The point at which I first run the container is (I think) where malicious code is most likely to enter my system, my container network, etc., that is, when I grab a container image someone made from a repository. If I download that image and it does no harm, and then a week later I create a file on the filesystem, I don't see how that's the same risk as when I first obtained the container archive. So again, I like the idea of having a digest of the files in the container, but...
There are just a lot of questions around maintaining a list of hashes within the container's filesystem, though I suppose it wouldn't hurt to make it optional. With a digest/signature of the packed container, it's pretty straightforward to unpack and verify, and it's also straightforward for me to repack and/or sign if I want to share the image. Maintaining a digest manifest of some or all container files is not a bad idea at all, but it's going to be slow for a large filesystem, especially considering we can't use SHA-1 anymore; we have to use SHA-2 or SHA-3, and that could be painful with really large files. Another issue I'm wondering about: at what point do we overwrite the digest manifest, to say "Yes, I modified some file like /etc/blah.conf, and I want to update its SHA-2 digest AND its digital signature"? It just seems like it could create a lot of headache for users without really offering much in terms of security; the standard would have to define the conditions under which to overwrite the digest manifest with user approval.
So I suppose what I'd say is that I'd be opposed to a digest of the entire container filesystem on a file-by-file basis. It might make sense to include a digest of certain key directories and binaries, but even then, my concern is: how do you present this to the user such that they are warned about filesystem changes but can also accept or reject them as line items? That's really what integrity would mean here, and it'd be way too granular and in the weeds, as opposed to a simple 'container matches' or 'container does not match', like how OpenSSH works with signatures. Maintaining user-friendliness is not the responsibility of this standard, but it seems like it'd create a nightmare scenario up the stack, in terms of repackaging a container back into an image, warning a user about filesystem changes one by one, and so on. If a file-level digest is included, it's got to be optional, or else really restricted by default. And I just don't think it adds any security in the context of what a digest does, which is to provide integrity in the event of malicious bit-flip-type attacks.
The best bet in my mind is probably a layered approach: require a digest/hash of the final container image (mandatory, but auto-generated and super idiot-proof), and then ALSO make an OpenPGP digital signature of the final container image optional but recommended. I think that approach strikes a good balance between implementation headache, usability, and simplicity.
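On the consumer side, the layered approach could look roughly like this (hypothetical filenames; the signature branch only runs when a .sig is actually shipped):

```shell
# Stand-in artifacts so the flow below is runnable; normally these
# would arrive together from a repository.
echo "container payload" > container.img
sha256sum container.img > container.sha256

# Layer 1 (mandatory): verify the digest; refuse to run on a mismatch.
sha256sum --check container.sha256 || { echo "digest mismatch"; exit 1; }

# Layer 2 (recommended): verify the OpenPGP signature if one is present.
if [ -f container.sig ]; then
    gpg --verify container.sig container.img || { echo "bad signature"; exit 1; }
else
    echo "warning: unsigned image (integrity only, no authentication)"
fi
```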
> ```
> $ $CONTAINER_PATH/digest $CONTAINER_PATH
> ```
>
> The nature of this executable is not important other than that it should run on a variety of systems with minimal dependencies. Typically, this can be a Bourne shell script. The output of the script is left to the implementation but it is recommended that the output adhere to the following properties:
>
> * The script itself should be included in the output in some way to avoid tampering
This is a good idea and important, but remember that a digest only ensures integrity relative to some **external trust anchor that might itself be malicious**. We really need digital signatures to actually prevent tampering, because otherwise how do we know we haven't been handed a fake SHA sum along with a tampered image? Really, we should start with digests but make digital signatures recommended (though not required); that way we can achieve authentication + integrity.
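As a minimal sketch of what such a self-including digest script might look like (hypothetical layout and filenames; the spec leaves the details open):

```shell
# Set up a toy container directory with a "digest" script inside it.
mkdir -p demo-container/rootfs
echo "hello" > demo-container/rootfs/motd

cat > demo-container/digest <<'EOF'
#!/bin/sh
# Hash this script first (so swapping it out changes the digest), then
# hash every file under the container path using stable, sorted,
# root-relative paths.
set -e
sha256sum "$0" | awk '{print $1 "  ./digest"}'
cd "$1"
find . -type f ! -name digest | LC_ALL=C sort | xargs sha256sum
EOF
chmod +x demo-container/digest

# Run it on the toy container and capture the manifest.
demo-container/digest demo-container > manifest.txt
cat manifest.txt
```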
> * The output should include the content, each filesystem path relative to the root and any other attributes that must be static for the container to operate correctly
Sure, this makes sense as far as building a Tripwire-like feature for containers, which I think is cool, and I wouldn't complain personally if it were in there. But I sure wouldn't want to implement it, nor would I want to wait for my slow container to load because it's running filesystem checksums that are probably redundant. I'm just not 100% sure filesystem-level digests are necessary if you add a SHA-2/SHA-3 digest and digital sigs of the container archive file.
> * The output must be stable
I think this might be a challenge, heh, but I suppose if you limit the directories being added to the digest, it could work. Again, I don't see what benefit this approach has over simply calculating a digest and digital sig over a squashfs-compressed container. People are probably not going to be distributing 'loose-file' (i.e., non-tgz, uncompressed) containers for the most part, so the security threat inherent in distribution is the main problem. Data at rest can be secured using the existing solutions already available, a la Tripwire, Snort, dm-crypt, TrueCrypt, etc.
Everything below looks good apart from my main objection. I definitely agree with the use of GPG/OpenPGP. X.509 might not be an appropriate choice unless we all want to ride on top of the crappy CA infrastructure, and I definitely do not want to do that, heh. X.509 also kind of brings with it the rot of OpenSSL, I'd fear. I'd just say use GPG/OpenPGP keys; that's a tested and proven solution without the bloat of OpenSSL.
> ### Sign
>
> The output, known as the container's digest, can be signed using any cryptography system (pgp, x509, jose). The result should be deposited in the container's top-level signatures directory.
>
> To sign the digest, we pipe the output to a cryptography tool. We can demonstrate the concept with gpg:
>
> ```
> $ $CONTAINER_PATH/digest $CONTAINER_PATH | gpg --sign --detach-sign --armor > $CONTAINER_PATH/signatures/gpg/signature.asc
> ```
>
> Notice that the signatures have been added to a directory, "gpg", to allow multiple signing systems to coexist.
> ### Verify
>
> The container signature can be verified on another machine by piping the same command output from the digest to a verification command. Following from the gpg example:
>
> ```
> $ $CONTAINER_PATH/digest $CONTAINER_PATH | gpg --verify $CONTAINER_PATH/signatures/gpg/signature.asc -
> ```
The rest of this all looks good to me; thanks for reading my comments.
We've taken this part of the specification in a slightly different direction, mostly due to many of the points you've brought up here. After some research, we found that several transports fail to properly transmit filesystem metadata. A transport-agnostic manifest addresses this issue while also providing a signable target.
Yeah, let's close it; this was just to share some thoughts. It's a non-issue, since these comments are likely obsolete. Please feel free to close this out, and I'll go read the updated info in #5 and #11, since those threads seem more current. I'll comment on those threads instead if I have anything interesting to add.