Isolate docker build environments #65
Conversation
logger.error("Removing team {} as their containers did not build successfully.".format(team.name))
self.team_names.remove(team.name)
if command[-2].startswith("generator") and not unsafe_build:
    image_archives.pop().unlink()
If I understand it correctly, we may raise an IndexError here if the very first generator we build fails to build, since image_archives is still empty at that point.
image_archives doesn't just contain the paths to generator archives but also to solver archives. Since the solvers are built first, there never will be a generator that is built first. The intention is to clean up the solver of the team whose generator failed, not to remove another generator.
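To make that ordering concrete, here is a hypothetical sketch of why the list cannot be empty when a generator build fails. This is not the project's actual code: `build_image` is a stand-in helper that builds one image and returns the path of its on-disk archive.

```python
# Hypothetical sketch of the order in which archives accumulate; `build_image`
# is an assumed helper that returns a pathlib.Path to the archived image.
def build_all(teams, build_image, unsafe_build=False):
    image_archives = []
    for team in teams:
        # The solver is built (and its archive recorded) before the generator,
        # so image_archives is never empty when a generator build fails.
        solver_archive = build_image(team, "solver")
        image_archives.append(solver_archive)
        try:
            build_image(team, "generator")
        except Exception:
            # Clean up the solver archive of the team whose generator failed,
            # not some other team's generator archive.
            if not unsafe_build:
                image_archives.pop().unlink()
    return image_archives
```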
Thank you for the pull request. Could you please look into the possible exception I commented on and reduce the complexity of the
14ad282 to 2a433e5
I don't know why GitHub isn't running the checks, but the linting issues are gone.
I had to remove some type hinting as Python 3.6 does not implement all of them yet.

I like the new option of an --unsafe_build flag. Testing shows that we indeed have a performance hit during execution, but the

Thank you for your work. I will publish this PR together with the encoding fix as a new subversion.
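For illustration only, such a flag could be exposed roughly like this; the parser description and help text below are assumptions, not the project's actual CLI definition.

```python
import argparse

# Illustrative only: one plausible way to wire up the flag, not the
# project's real CLI code.
parser = argparse.ArgumentParser(description="Run a battle between teams")
parser.add_argument(
    "--unsafe_build",
    action="store_true",
    help="skip the slower isolated build and reuse the local docker cache",
)

options = parser.parse_args(["--unsafe_build"])
assert options.unsafe_build
```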
We essentially already do that: if there already is an image built from that file in the cache, Docker won't actually build a new one. Even without the new safe build, the battle startup time is pretty marginal already.
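As a sketch of that caching behaviour using the Docker Python SDK (the project may invoke the docker CLI instead; the context path and tag below are made up), rebuilding an unchanged context is essentially free because the layer cache is reused:

```python
import docker

client = docker.from_env()

# First build: layers are created and cached.
image, _ = client.images.build(path="team0/solver/", tag="team0-solver")

# Second build of the unchanged context: every layer is a cache hit, so this
# returns almost immediately, which is why plain battle startup is already fast.
image, _ = client.images.build(path="team0/solver/", tag="team0-solver")

# Passing nocache=True would force a full rebuild and give up that advantage.
```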
This addresses #64 by isolating the build environments of each image from each other. To do this I archive the newly created image to disk. This comes at a significant performance hit, so I also introduced the --unsafe_build CLI flag for quicker local debugging runs.
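Roughly, the isolation idea could look like the following sketch with the Docker Python SDK; the real implementation may shell out to `docker save`/`docker load` instead, and the helper names, paths, and tags here are hypothetical.

```python
import docker
from pathlib import Path

client = docker.from_env()

def build_isolated(context, tag, archive_dir):
    """Build an image, write it to a tar archive on disk, and remove it from
    the daemon so later builds cannot see or reuse it (sketch only)."""
    image, _ = client.images.build(path=context, tag=tag)
    archive = Path(archive_dir) / "{}.tar".format(tag.replace("/", "_"))
    with archive.open("wb") as f:
        for chunk in image.save():  # stream the image tarball to disk
            f.write(chunk)
    client.images.remove(image.id, force=True)
    return archive

def load_archived(archive):
    """Re-import an archived image just before it is needed for a battle."""
    with Path(archive).open("rb") as f:
        return client.images.load(f.read())[0]
```

The extra save/remove/load round trip is where the performance hit mentioned above comes from, and it is exactly what --unsafe_build skips for local debugging runs.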