Improve build times by caching layers, when possible #44
Conversation
By using `ghcr.io/laminas/laminas-continuous-integration-action:build-cache` as a registry-based docker image layer cache, we can avoid re-building any intermediate docker image layers for which an exact SHA256 match exists. This will dramatically speed up builds that happen to change little within the `Dockerfile` (or any related sources), and it will reduce any network I/O failures caused by upstream package manager hiccups. Note that random users cannot just push to the cache: active caching can only happen when operating within the boundaries of the `laminas/*` organization (direct repository pushes, operations performed by maintainers).
In a previous commit, we introduced `cache-from` and `cache-to`, which leads to re-using docker image layers as much as possible. This is fine, but it also means that, in the long term, we may run into stale base layers, where dependencies do not get updated when the system shifts dramatically. To prevent that, we do a full re-build when the base image changes: it's an acceptable cost for something that changes less frequently, while we still retain caching advantages when multiple pushes are performed close to each other.
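As a rough sketch, a registry-backed layer cache with `docker/build-push-action` might look like the following (the step name, action version, and `latest` tag are illustrative assumptions; only the `build-cache` ref is taken from this PR):

```yaml
# Hypothetical workflow step illustrating registry-based layer caching.
- name: Build and push (with layer cache)
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: ghcr.io/laminas/laminas-continuous-integration-action:latest
    # Pull previously cached layers from the registry, if any exist.
    cache-from: type=registry,ref=ghcr.io/laminas/laminas-continuous-integration-action:build-cache
    # mode=max also exports intermediate layers, not just the final image.
    cache-to: type=registry,ref=ghcr.io/laminas/laminas-continuous-integration-action:build-cache,mode=max
```

Layers whose inputs hash to an exact SHA256 match of a cached layer are reused instead of rebuilt.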
Hmm, I'm unsure how to test this one: it seems to me that the

```yaml
on:
  release:
    types: [published]
```

@weierophinney how do we verify this change? 🤔
In fact, I think we should test the … The idea would be to have …
We can add a workflow for that - it would be identical to the `build-and-push-containers` workflow, but with …
Yeah, I didn't figure out how to do it without completely replicating the workflow file (which I'd like to avoid).
Add the `pull_request` event to the workflow. For the …
Force-pushed from 0482c53 to ae74bc0
Force-pushed from b2c982d to 1bb112d
This change:

- adds "pull_request" as an event that will trigger the workflow.
- marks the "docker login" step as conditional, only running if the "container_username" secret is present.
- alters the original "Build and push" step such that it only runs on a release.
- adds two new "build and push" steps:
  - one runs on pull_request if a container_username secret is present, and caches layers
  - one runs on pull_request if a container_username secret is NOT present, and does NOT cache layers

Signed-off-by: Matthew Weier O'Phinney <[email protected]>
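The steps above could be sketched roughly like this (the secret name comes from the commit message; the expressions, job layout, and `CONTAINER_PASSWORD` secret are assumptions for illustration, not the PR's actual diff):

```yaml
# Hypothetical sketch of the conditional build steps described above.
on:
  pull_request:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      # Empty on forked-PR runs, where repository secrets are not injected.
      HAS_REGISTRY_CREDS: ${{ secrets.CONTAINER_USERNAME != '' }}
    steps:
      - name: Docker login
        if: ${{ env.HAS_REGISTRY_CREDS == 'true' }}
        run: echo "${{ secrets.CONTAINER_PASSWORD }}" | docker login ghcr.io -u "${{ secrets.CONTAINER_USERNAME }}" --password-stdin
      - name: Build and push (release)
        if: ${{ github.event_name == 'release' }}
        run: docker buildx build --push ... # caches layers
      - name: Build (pull request, credentials available)
        if: ${{ github.event_name == 'pull_request' && env.HAS_REGISTRY_CREDS == 'true' }}
        run: docker buildx build ... # caches layers
      - name: Build (pull request, no credentials)
        if: ${{ github.event_name == 'pull_request' && env.HAS_REGISTRY_CREDS != 'true' }}
        run: docker buildx build ... # no layer caching possible
```

Guarding on a job-level `env` value derived from the secret keeps the conditionals portable, since secrets resolve to empty strings when absent.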
Force-pushed from 1bb112d to a6f9b54
@Ocramius figured out how to make this happen. Interestingly, it will be better in the future for TSC members to submit PRs as branches pushed directly to the repo: otherwise repository secrets are not injected in the workflow, so we have no credentials for pushing to the registry, and thus no ability to cache layers. (Releases will always cache layers, however.) Pushing a change to the …
🚢
…nical repo Signed-off-by: Matthew Weier O'Phinney <[email protected]>