no global packages auto written to env files #8607
Conversation
I mostly agree with you that it has been a mistake (especially in terms of how it was implemented), but the rationale is fairly important. I think this is worth trying, though, and if this works well enough, I think my fears were misplaced. If it doesn't, the solution is to remove the bundled library list before solving with the packages to be installed, solve, and then re-add the bundled libraries if they're not in the transitive closure of the finalized plan.
Thanks to both of you for the thoughtful explanation of what is going on.
This objective led to a disastrous design, in my opinion. I think the time has long since come to stop trying to emulate v1-install and to try a cleaner design. In particular, I think no one should expect …
I agree with everything in the OP except this. I am pretty sure it's very easy to break the global environment with multiple independent installs — something that novices often do. This is the reason I argued for forbidding this scenario altogether in #8483, which ultimately failed to find support. The current PR is a more moderate approach, and a step in the right direction.
I think it needs more testing (please do!), but the env files created, despite not enumerating base, etc., will nonetheless provide everything in the global package db, because it's sort of unavoidably there!
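For concreteness: the boot packages live in GHC's global database, which ghci consults anyway, and one can list it directly (exact contents vary by installation):

```sh
# List the packages in GHC's global database -- base, ghc, process, etc.
# These stay visible to ghci even when the env file does not enumerate them.
ghc-pkg list --global
```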
I think that most breaks were due to this behavior, which made things almost impossible not to break. Without it, I suspect breaks will be relatively rare -- we'll need more testing in different situations to check, but it could well suffice. Speaking of which, @yaxu, I don't want to get ahead of myself and overpromise, but care to give this a spin and see if it helps your use case?
Thanks so much for your work on this @gbaz!
I don't think I'm a novice; I develop one of the more popular Haskell libraries. I think being able to install a library for use with ghci, and later upgrade it, should be a first-class, well-supported use case.
I just tried removing … That didn't work: … The env file is pleasingly short with the tidal-1.9.2 constraint, but it would be nice to be able to upgrade that version to the one I'm explicitly installing in this case, preferably without adding a … I also tried this with cabal-install 3.8.1.0, and the same happened (but with the longer package db). I can believe that this change would help in a lot of cases, though.
@yaxu thanks for checking! Reinstalls (including version upgrades) are not a part of this PR. But your desire supports my point above: people do want reinstalls, and the current behavior is confusing. The error message should explicitly say that updating environment files is not supported (as proposed in #8483), and not fail with a solver error: it doesn't make any sense to try to solve such requests currently.
@ulysses4ever, so I understand this a bit better, could you clarify the difference between 'multiple independent installs' and 'reinstalls' please? Edit: actually, reinstalls are mentioned in the PR description... are you sure this isn't about reinstalls? And if changing a working environment isn't supported, what are we supposed to do instead? Make a new, named environment each time anything changes, updating our configurations to use the latest name?
@yaxu thanks for the input! Multiple independent installs means that you can install X and then Y and then Z, which should be fine with this patch, afaik. Reinstalls means that you can install X and then X again at a different version (or the same version but a different hash, if you're working from a local dir/repo). The latter is what you tried, and this PR indeed doesn't improve it. When you tried different versions, it at least gave an error -- with a single version but a different hash, I tested and it's even worse: it adds the package fine, but it adds rather than replaces (so we get ambiguity when trying to use it, and both fail).

The problem is that deleting from an env file is not necessarily safe, since removing one package could have knock-on effects on things which link against it. So appending is fine, but clobbering an existing entry with a new one is not necessarily so, and requires design work. I think for now we could at least detect a requested reinstall and give a better error message, although such a message would probably suggest either deleting the existing env or creating a new one instead. (This would be necessary if package X was installed and there is a request to install it again, but not triggered if X was installed and then Y, which was compatible with X, was requested to be installed.)
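For concreteness, a sketch of the two cases with hypothetical package names and versions:

```sh
# Multiple independent installs -- fine with this patch:
cabal install --lib X
cabal install --lib Y
cabal install --lib Z

# A reinstall -- the same package at a different version (or a different
# hash from a local dir/repo) -- which this patch does not yet handle:
cabal install --lib X-1.0
cabal install --lib X-2.0   # X is already pinned in the env file
```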
For now, I think the goal should be: it may not always be possible to extend an environment, but if extending does succeed, the extended environment should not be broken (i.e. should not contain packages with conflicting bounds). That is basically what this patch addresses.
Yes, I've seen the same package-with-different-hash-added problem a lot. In practice, a lot of people have very simple requirements: install a package, update it to a new version, and maybe never install any other package. I'd generally put myself in that category, actually -- v1-install has regularly given me dire warnings that reinstalling some package is dangerous and almost certain to break everything, but adding --force-reinstalls or whatever has always been completely fine. I'm just at a bit of a loss at the idea that a reinstall/upgrade isn't supported behaviour. I think I'm either missing something major, or as cabal maintainers you just have a very different idea from mine about how people should/must work with software.
I'm not saying it shouldn't be supported! I'm not making a normative claim, merely a descriptive one. I'm saying that it currently doesn't work right, and I don't think we can target a sound solution for this release. I suppose a … (OTOH, the ticket linked by @ulysses4ever does advocate "we should give up on getting it right, and instead just say we won't handle it" -- but note that ticket met opposition and was closed!)
OK, I updated the PR to fail with a message if there's a reinstall, and to suggest the --force-reinstalls flag, which now works and cleans out previous entries for the reinstalled target in the env file. Further, I fixed things so that already-installed packages are "solved" with entries available from the existing installed-package index, so that if you install "foo" locally, then install "bar" from Hackage, it doesn't yell about not being able to solve for foo.
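Under this update, the reinstall flow would look something like the following (the package and version are hypothetical):

```sh
cabal install --lib tidal          # pins some version of tidal in the env file
cabal install --lib tidal-1.9.3    # now fails with a message suggesting:
cabal install --lib tidal-1.9.3 --force-reinstalls   # replaces the old entry
```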
Yes. Though I don't use named ones, just: …

Note that environments (little files created by …

*) That is modulo some known limitations of …
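For readers unfamiliar with them: a GHC package environment file is a small text file of directives. A typical one looks roughly like this; the paths, hashes, and default location (which assumes GHC 9.2.4 on x86_64 Linux) are illustrative:

```sh
# Show the default environment file (location varies by platform/GHC version):
cat ~/.ghc/x86_64-linux-9.2.4/environments/default
# Typical contents:
#   clear-package-db
#   global-package-db
#   package-db /home/user/.cabal/store/ghc-9.2.4/package.db
#   package-id tidal-1.9.2-<hash>
```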
Aha, thanks! That's a bit unwieldy though. It would be great if cabal could both take care of removing/replacing existing environment files, and take care of remembering the space-separated list of libraries. Or is that what …? I'm supporting people who are unfamiliar with command lines, including Windows users, so having something like the above in a standard workflow isn't really feasible. (I'd also love it if the …)
@yaxu understood. There are only two ways currently implemented (sketched below):

1. delete the existing environment file and install the full list of libraries again from scratch, or
2. create a fresh, named environment each time and point your tools at it.
If you want something else, I don't have good news for you. You can either wait (possibly indefinitely) for something else to emerge, or pick your poison from the list above...
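Concretely, those two options might look like this; the paths, environment name, and package list are illustrative:

```sh
# Option 1: delete the default environment file, then install the full list again.
rm ~/.ghc/x86_64-linux-9.2.4/environments/default
cabal install --lib tidal hosc

# Option 2: write a fresh named environment each time, and point ghci at it.
cabal install --lib tidal hosc --package-env tidal-2023
ghci -package-env tidal-2023
```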
Thanks, I don't really want to be manually deleting files all the time, though, and I definitely don't want to be supporting others doing that. Up to now I've been sticking with …
I would like to merge this basically as-is (perhaps with more docs, tests, etc.). However, I also want to record here an idea for future work that emerged from an IRC conversation.

Currently, we solve with a plan that includes as targets everything in the existing env file as a "resolved" target -- i.e. pinned to a particular hash already installed -- and also, of course, the things specified on the command line as "unresolved" targets. However, we only record in the env file the immediate things being installed, and not the resolved transitive deps. This means that a diamond-deps conflict is possible if we install A depending on B-1, then later install C depending on B-2.

One idea would be that when we solve, we also "unresolve" everything in the existing env file. This way, extending the env file may cause more reinstalls, but it guarantees consistency across all things being installed, and prevents the diamond dependencies that can otherwise arise -- i.e. in the above scenario we would now need to reinstall A depending on B-2 if possible, or failing that, error appropriately.
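A sketch of the diamond scenario described above, with invented packages A, B, and C:

```sh
# A depends on B; the solver picks B-1, and the env file records only A.
cabal install --lib A
# Later, C also depends on B; the solver now picks B-2, and the env file
# records A and C -- but A was built against B-1 and C against B-2.
cabal install --lib C
# ghci loading both now sees two copies of B in the transitive closure,
# and types from B-1 and B-2 do not unify: a diamond conflict.
```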
As the relevant code owner, I'm good to merge this as it currently sits. |
Changelog added, ready to merge, waiting on one more approval.
LGTM
Ready for a mergeme label.
```diff
@@ -1729,6 +1729,7 @@ planProject testdir cliConfig = do
         distDirLayout cabalDirLayout
         projectConfig
         localPackages
+        Nothing
```
Does this test the new behaviour, or was it added just to make it compile? AFAIU, a test checking the expected content of the env file would be needed.
I agree adding a test would be good. This indeed just makes it compile.
However -- a regression test against this behavior is not the test needed. The behavior was something complicated that was removed; it's not just going to magically reappear. What we need are some standard integration tests for "cabal v2-install" at all.
Yeah, I was thinking of generating an env file with the version of this PR and testing it against a golden file (or regexp-like checks on the relevant parts).
And well, the goal of a regression test is just that: to ensure that the behaviour, well, does not regress, magically or by accident 😄
Resolves #6165
Resolves #5559
This is a rather brutal solution that I think should just make env files work better. I'm not sure why this approach was not always taken. I would particularly appreciate feedback from @fgaz and @typedrat.
I only understood this issue once I started digging into the code carefully. It turns out that, from the start of the cabal v2-install command, not only the packages we explicitly install are written to the env file, but also every "global" package -- a hardcoded list of packages that roughly matches the global package database. In fact, #5559 refers to this behavior at the beginning of the ticket ("Because new-install wants to keep every package that GHC bundles in the environment"), but I didn't realize the gravity of this sentence, and I never understood how and why this was implemented.
This, I think, has been a terrible decision. It means that almost all "reinstalls" into an existing env file are bound to fail, because the ghc package will be in the env file, which pins directory, process, etc. But meanwhile, directory, process, etc. may in fact already be pinned differently because the solver picked newer ones! So we build an env file on one set of constraints, but once we add "all global packages" we get an env file that is innately, in itself, unsolvable.
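To make the failure mode concrete, here is a sketch of an env file as written under the old behaviour; the package versions, hashes, and paths are invented:

```sh
cat .ghc.environment.x86_64-linux-9.2.4
# clear-package-db
# global-package-db
# package-db /home/user/.cabal/store/ghc-9.2.4/package.db
# package-id ghc-9.2.4           <- pins the boot directory, process, ...
# package-id process-1.6.13.2    <- the boot version of process
# package-id mylib-0.1-<hash>    <- but mylib may be built against process-1.6.16
```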
That's the cause of all the frustration and mysterious behavior over the years. I strongly suspect that if we just stop doing that, then everything works better, and in my preliminary tests it appears so.
This certainly could use a fair amount more thought and feedback from others, as well as some testing -- give it a shot!