resolving TUF target name from distribution download URL #6
This sounds reasonable to me -- it's unlikely that Warehouse will be moving away from BLAKE2b anytime soon, so this could just be a constant baked into pip. Then again, it would certainly make migrating a pain, should that ever need to occur. But I expect such a migration would require a total turnover of the package index links anyway, so perhaps that's not a big deal.
In the very unlikely event of a hash change, the worst outcome is that clients that did not upgrade in time would have to use "--disable-package-security" (or whatever it ends up being called) once to upgrade pip... so maybe this is reasonable. For now, I'll work with the assumption that the metadata name is the filename (without fragments) plus enough preceding path components to form a 256-bit blake2b hash. Thanks for the reply.
The current implementation takes the metadata name to be the filename plus the three preceding directory names -- with a sanity check to ensure the result has the correct length. I'm closing this but keeping a note to mention it in review.
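The approach described above -- filename plus the three preceding path components, with a length sanity check -- can be sketched roughly as follows. The function name and the constant are illustrative, not pip's actual API:

```python
from urllib.parse import urlsplit

BLAKE2B_256_HEX_LEN = 64  # a 256-bit digest rendered as hex

def target_name_from_url(url: str) -> str:
    """Derive a TUF target name from a distribution download URL.

    Sketch: take the filename plus the three preceding directory
    names; on files.pythonhosted.org those directories together
    spell out the blake2b-256 digest of the file.
    """
    path = urlsplit(url).path  # drops any query string and fragment
    components = path.split("/")[-4:]  # 3 hash directories + filename
    if len(components) != 4:
        raise ValueError(f"too few path components in {url!r}")
    hash_part = "".join(components[:3])
    if len(hash_part) != BLAKE2B_256_HEX_LEN:
        raise ValueError(f"{url!r} does not embed a 256-bit digest")
    return "/".join(components)

# Hypothetical URL following the real 2 + 2 + 60 hex-character layout:
url = ("https://files.pythonhosted.org/packages/"
       "ab/cd/" + "e" * 60 + "/example-1.0.tar.gz")
print(target_name_from_url(url))
```

The sanity check means a baked-in assumption about the URL layout fails loudly rather than silently producing a bogus target name.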
A design goal is to minimize the required client configuration. In practice, I'm hoping I won't have to store the package base directory ('packages/' on files.pythonhosted.org) in the configuration.
The plan is to integrate TUF into pip at a point where we get a Link object, which contains, among other things, the full URL of the file to be downloaded and helper properties for parsing it. The issue is how to extract the TUF metadata name from that URL.
Example URL:
We want to extract
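As background, the hash-based layout that makes this extraction possible can be illustrated as below. This is a sketch of my understanding of Warehouse's storage scheme (digest split into 2 + 2 + 60 hex characters), not an authoritative description:

```python
import hashlib

# Illustration: under 'packages/', the three directory names are the
# blake2b-256 digest of the file, split 2 / 2 / 60 hex characters.
content = b"example distribution bytes"  # stand-in for a real sdist/wheel
digest = hashlib.blake2b(content, digest_size=32).hexdigest()
path = f"packages/{digest[:2]}/{digest[2:4]}/{digest[4:]}/example-1.0.tar.gz"
print(path)
```

Because the digest length is fixed, a client can recover the hash portion from the tail of any download URL without knowing the base directory in advance.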