Re-enable target-based dependency resolution #3121

Conversation

@neonichu (Contributor)

It looks like we were not always fetching dependencies of packages with pre-5.2 tools versions during resolution. It seems correct not to consider the product filter during dependency resolution for pre-5.2 packages, since it won't be correct for them by definition.

rdar://70633425

@neonichu marked this pull request as draft December 16, 2020 22:19
@neonichu (Contributor Author)

@swift-ci please smoke test

@@ -222,6 +222,7 @@ public class RepositoryPackageContainer: PackageContainer, CustomStringConvertib
productFilter: ProductFilter
) throws -> (Manifest, [Constraint]) {
let manifest = try self.loadManifest(at: revision, version: version)
let productFilter = manifest.toolsVersion < ToolsVersion.v5_2 ? ProductFilter.everything : productFilter
@neonichu (Contributor Author)

This is a bit of a blunt instrument, but it should yield correct results.

Contributor

This single line fixes the top secret errors that were reported?

I’m curious whether the errors occur generally during fresh resolution, or whether they only occur in the presence of a pins file.

I ask because the way I implemented the filtering, manifests < 5.2 asked for every dependency product from every package. Essentially that meant any package that might contain the intended dependency would be resolved. The intent was that if a 5.1 manifest depends on a package that later gets a minor version bump with a 5.2 manifest, that transitive 5.2 manifest knows which of its products were actually requested and can therefore meaningfully filter its own dependencies. What occurs to me now is that this had a side effect of culling some dependencies in a way that is logically valid, but not stable compared to the previous treatment of 5.2 manifests.

For example, consider the case of a 5.2 manifest which declares a slew of package dependencies, but its actual targets declare no dependencies whatsoever. Under a 5.1 toolchain, all those dependencies were resolved. Under the initial filtering algorithm, all those dependencies were ignored. Ignoring them is valid as far as the build process is concerned. However, because it produces different pins, it introduces an instability where old and new toolchains would each be insisting that the other’s pins are invalid.
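
For concreteness, a minimal sketch of the kind of manifest described here (package names and URLs are illustrative):

// swift-tools-version:5.2
import PackageDescription

let package = Package(
    name: "Example",
    dependencies: [
        // Declared, but never referenced by any target below.
        .package(url: "https://example.com/A.git", from: "1.0.0"),
        .package(url: "https://example.com/B.git", from: "1.0.0")
    ],
    targets: [
        // With no target dependencies, product filtering prunes A and B
        // entirely, while a 5.1 toolchain would have resolved and pinned both.
        .target(name: "Example")
    ]
)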

Can you look into whether this seems to be the real problem, or at least whether it is the deciding factor that channels execution into the problematic code branch? If so, then the correct solution is to change the handling of pre‐5.2 manifests to request absolutely everything absolutely all the time.

Such a fix would be done most cleanly here, right where it originates, by injecting something like this:

if toolsVersion < ToolsVersion.v5_2 {
  return dependencies.map { $0.filtered(by: .everything) }
}

From that simple change, everything else should cascade back into place such that manifests from 5.1 and earlier are always and everywhere treated exactly as they were by the 5.1 toolchain. It would mean some graph pruning opportunities would be missed, but it would ensure pin stability. Those missed optimizations would only affect old manifests, which presumably will become less and less common over time anyway. (5.2 manifests and above would still properly skip everything that isn’t used.)

@neonichu (Contributor Author)

The issue I have been seeing is that we can miss dependencies of 5.1-and-earlier packages that are actually required; the scenario is the same package being required by multiple different packages transitively. I need to dig in a bit more; I put this up more as a straw man so that we have some solution to reactivate target-based dependency resolution that we can land in 5.4.

Where I am right now: I'm seeing that we don't download a few dependencies as part of resolution, which leads to us not being able to include them later on here. This piece of code in Workspace is kind of subtle, because unless we have previously fetched a given package, we'll just fail and ignore it. I'll report back once I have more.

Contributor

I need to dig in a bit more; I put this up more as a straw man so that we have some solution to reactivate target-based dependency resolution that we can land in 5.4.

Yes, that makes sense.

I still think moving the workaround to the Manifest type like I suggested would be more consistent. I don’t think RepositoryPackageContainer should alter the resolution strategy, as it won’t apply to other conformers of the PackageContainer protocol.

the scenario is the same package being required by multiple different packages transitively

That sounds like there is another place in need of a similar fix to this part of #3006. If a topologicalSort or similar graph traversal uses a node definition of just the package—instead of the package–filter pair—then it would think it had already visited that node and neglect to look at its dependencies even if the filter is different the second time the node shows up. The result would be exactly the symptoms you describe.
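
As a generic sketch of that failure mode (illustrative types, not SwiftPM’s actual traversal): if the visited set hashes only the package, the second arrival with a different filter is skipped and its extra dependencies are never explored.

struct Node: Hashable {
    let package: String
    let productFilter: String // dropping this from the node identity reintroduces the bug
}

func depthFirst(from roots: [Node], successors: (Node) -> [Node]) -> [Node] {
    var visited = Set<Node>() // keyed on the package–filter pair
    var order: [Node] = []
    var stack = roots
    while let node = stack.popLast() {
        // A node is visited whenever the (package, filter) pair is new,
        // even if the package itself was seen before with another filter.
        guard visited.insert(node).inserted else { continue }
        order.append(node)
        stack.append(contentsOf: successors(node))
    }
    return order
}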

@neonichu (Contributor Author)

WorkspaceTests.testLocalDependencyTransitive and WorkspaceTests.testMinimumRequiredToolsVersionInDependencyResolution are failing.

@neonichu (Contributor Author)

I think both of these tests simply need updating; they were partially disabled as part of disabling target-based dependency resolution and therefore weren't updated as part of the pubgrub diagnostics changes.

@neonichu (Contributor Author)

@swift-ci please smoke test

@neonichu (Contributor Author)

/Users/buildnode/jenkins/workspace/swift-package-manager-with-xcode-self-hosted-PR-osx/branch-main/swiftpm/IntegrationTests/Tests/IntegrationTests/XCBuildTests.swift:102: error: -[IntegrationTests.XCBuildTests testExecutableProducts] : failed - Command failed with exit code: terminated(code: 1)

@neonichu (Contributor Author) commented Dec 17, 2020

I can reproduce the failures locally, but I don't understand them at all:

'/Users/neonacho/Projects/swiftpm-public/IntegrationTests/Fixtures/XCBuild/ExecutableProducts/Foo/.build/apple/ModuleCache.noindex/2NNZALOZFA99I', but the path is currently '/private/var/folders/2f/3dn8p1h535j778_1_3wnj6qh0000gn/T/XCBuild_ExecutableProducts.em1I7H/Foo/.build/apple/ModuleCache.noindex/2NNZALOZFA99I

It somewhat sounds as if we are switching derived data paths mid-build? I don't really understand how that could be caused by my changes.

@neonichu (Contributor Author)

I can solve this particular issue by setting an explicit MODULE_CACHE_DIR when generating PIF, but there are some more. I think there's one failure that could actually be related to the change, but otherwise this feels quite orthogonal.

@SDGGiesbrecht (Contributor) left a comment

(My tabs got in a fight with GitHub and I cannot tell if my last post attempt succeeded or not. I apologize if this results in a duplicate comment.)

See the inline comment.

@SDGGiesbrecht (Contributor)

WorkspaceTests.testLocalDependencyTransitive and WorkspaceTests.testMinimumRequiredToolsVersionInDependencyResolution are failing.

I think both of these tests simply need updating; they were partially disabled as part of disabling target-based dependency resolution and therefore weren't updated as part of the pubgrub diagnostics changes.

I concur with that diagnosis.

I can solve this particular issue by setting an explicit MODULE_CACHE_DIR when generating PIF, but there are some more. I think there's one failure that could actually be related to the change, but otherwise this feels quite orthogonal.

I do not understand these other errors either. I think I will have time tomorrow to clone and take a closer look.

@neonichu (Contributor Author)

I do not understand these other errors either. I think I will have time tomorrow to clone and take a closer look.

Thanks for offering to help, but it seems like these issues are completely unrelated to target-based dependency resolution. Feel free to have a look anyway, of course :)

@friedbunny (Member)

@swift-ci please smoke test macos self hosted

@SDGGiesbrecht (Contributor)

Based on your description, I attempted to reproduce it. I created a simple skeleton 4‐node diamond where all packages have a 5.1 manifest. When the top package has no Package.resolved, it builds successfully. But when it has a Package.resolved from the 5.3 toolchain, then it errors:

error: the Package.resolved file is most likely severely out-of-date and is preventing correct resolution; delete the resolved file and try again

Does this sound like what you are seeing?

I am attempting to debug it right now.

@SDGGiesbrecht (Contributor)

Ignore my last comment. 🤦‍♂️ I had generated the legacy pins by running swift build inside the fixture, but the test code was copying the entire directory structure elsewhere. Hence the pins and the manifest disagreed about the various packages’ URLs, and the error was completely unrelated. Once I fixed that, the entire diamond compiled correctly, which means I haven’t actually been able to reproduce whatever bug you are trying to fix.

@neonichu (Contributor Author) commented Dec 17, 2020

I'm seeing the same symptom, but it happens with or without a Package.resolved file. In my case, some required dependencies are genuinely missing, which leads to the same error message. I'm thinking we might also want to diagnose this kind of issue in a better way, e.g. by erroring at the point where we notice that a required remote package has never been fetched (e.g. here). The project is fairly complex and I haven't been able to distill it down yet, so I might still be missing a crucial piece that's required to reproduce this.
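
A rough sketch of what such a diagnostic could look like (all names here are hypothetical stand-ins, not the actual Workspace API):

struct ResolutionError: Error, CustomStringConvertible {
    let description: String
}

// "managedDependencies" stands in for Workspace's record of previously
// fetched packages; erroring here surfaces the problem at its source
// instead of silently skipping the package later.
func checkedOutDependency(_ package: String,
                          in managedDependencies: [String: String]) throws -> String {
    guard let checkout = managedDependencies[package] else {
        throw ResolutionError(description:
            "required dependency '\(package)' was never fetched during resolution")
    }
    return checkout
}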

@SDGGiesbrecht (Contributor)

Yes, I believe you. It is just that as someone on the outside without access to the project that triggers it, I am left trying to debug a black box whose execution flow I cannot trace. I try to think through the possibilities and be as helpful as I can based on what I can glean from your comments, but I have run out of ideas for the moment.

I'm thinking we might also want to diagnose this kind of issue in a better way, e.g. by erroring at the point where we notice that a required remote package has never been fetched (e.g. here).

If you think it would be helpful. Are there other, legitimate ways that state could arise, or does it unquestionably indicate a programmer error?

@neonichu (Contributor Author)

Is it actually correct that we unconditionally return nil here? It seems like there are cases where dependency.productFilter is .everything but we end up in that branch.

@SDGGiesbrecht (Contributor) commented Dec 18, 2020

I think that line is correct. That method is below the level where the product filter is applied. The intent is that it simply answers the question, “In order to build target list x, which dependencies would(/might) I need?”

  • The first branch of the if handles declared relationships in 5.2 and up, as well as inferred possible pre‐5.2 relationships (corralled there by the previous few lines).
  • The second branch handles immediate dependencies of the top‐level package, overridden to be included even if unreferenced (I think so that their executables can still be located at development time, but my memory of the reasoning is a little fuzzy).
  • The last branch handles the case where there is a package dependency, but no explicit relationships to it as a target dependency, and no inferred target dependencies exist.

If an actually required dependency is landing in that third branch, then the first question is whether or not it is declared explicitly.

  • If it is explicit, then there must be an error somewhere in the register method that is somehow dropping it, so that the lookup in the condition for the first branch is unexpectedly failing.
  • If it is implicit (I think everything pre‐5.2 is necessarily implicit), then it could be the same problem in the register method, or it could be something wrong with the few lines right above the if statement where the inference possibilities are enumerated.
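
A skeleton of the branching described above might look like this (all names and shapes are assumptions for illustration, not SwiftPM’s actual code):

enum ProductFilter {
    case everything
    case specific(Set<String>)
}

func requiredProducts(referencedProducts: Set<String>,
                      isRootDependency: Bool) -> ProductFilter? {
    if !referencedProducts.isEmpty {
        // Branch 1: declared (5.2+) or inferred possible (pre-5.2) relationships.
        return .specific(referencedProducts)
    } else if isRootDependency {
        // Branch 2: immediate dependencies of the top-level package stay
        // included even when unreferenced.
        return .everything
    } else {
        // Branch 3: a package dependency with no explicit or inferred
        // target relationship; nothing is required from it.
        return nil
    }
}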

@SDGGiesbrecht (Contributor) commented Dec 18, 2020

Here is another stab in the dark (ignore it if your insider information suggests it is way off):

Was it originally possible for a target and a dependency’s product to share the same name, with the by‐name target dependency successfully pointing at both at once? i.e. given...

// ...
  dependencies: [
    .package(url: "somewhere.com/SameString", /* ... */)
    // (↑ Has a “SameString” product with a “Dependency” target.)
  ],
  targets: [
    .target(name: "SameString"),
    .target(name: "Target", dependencies: ["SameString"])
  ]
// ...

...was Target successfully able to do this?

import Dependency // *
import SameString

If that used to work, then the assumption underpinning this line is faulty. That is a plausible hypothesis for how a corner‐case dependency could end up dropped.


The only other code paths that break without registering anything are here and here. The first should only apply to targets (not dependencies) and the second should only apply to 5.2 manifests and higher.

@neonichu (Contributor Author)

It appears that I was wrong about the concrete scenario, because it is actually non-deterministic which required packages we are dropping. That also explains why I have been having a hard time debugging this: I tried to zero in on a specific one of them, which didn't work because of the non-determinism.

One source of non-determinism in the resolution process that I noticed is here. Since each DependencyResolutionNode is associated with a single product, we can have multiple nodes with the same version constraints but different products. I don't think the sorting is deterministic in that case.

@SDGGiesbrecht (Contributor)

I concur.

@SDGGiesbrecht (Contributor)

Something like the following would at least be deterministic (instead of counts[$0]! < counts[$1]!):

{
    (counts[$0]!, $0.node.package.name, $0.node.specificProduct ?? "")
  < (counts[$1]!, $1.node.package.name, $1.node.specificProduct ?? "")
}
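
As a standalone illustration of why this works (toy data, not SwiftPM types): tuples compare lexicographically, so a tie on the count falls through to the stable name-based components instead of whatever order the set happens to iterate in.

let counts = ["A": 2, "B": 2, "C": 1]
let undecided: Set<String> = ["A", "B", "C"]
// With only the count, min is ambiguous between A and B on a tie of 2;
// the secondary key makes the winner reproducible across runs.
let pick = undecided.min { (counts[$0]!, $0) < (counts[$1]!, $1) }!
print(pick) // always "C"; on a count tie, "A" deterministically beats "B"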

@neonichu (Contributor Author)

This seems problematic: it means we will potentially skip creating incompatibilities for certain product filters, and in conjunction with the non-determinism in picking which product filter comes "first", it seems non-deterministic which ones we are skipping.

Changing this to be aware of product filters indeed helps, but I'm wondering whether that's actually necessary. Presumably, we avoid creating all these additional DependencyResolutionNodes for every possible product filter when a given package uses tools version 5.2 or later, since we know exactly which package declares a given product.

So possibly using .everything instead of concrete lists of products is the way to go for older tools-versions.

@neonichu (Contributor Author)

Regarding the integration test failures, there were actually two separate issues. The module cache one I was seeing locally happens if there's a leftover .build directory from running swift build manually, and it's unrelated to the CI failure.

The CI failure is actually the --target case discussed in the original PR #2749, so I think it's safe to remove that part of the test case.

…y resolution

As discussed in the original PR swiftlang#2749, we no longer support building arbitrary targets using `--target`, but this test was relying on it. We removed a similar test from the unit tests and should do the same with this one.
@neonichu (Contributor Author)

@swift-ci please smoke test

@SDGGiesbrecht (Contributor)

This seems problematic: it means we will potentially skip creating incompatibilities for certain product filters, and in conjunction with the non-determinism in picking which product filter comes "first", it seems non-deterministic which ones we are skipping.

Probably. I’ll take a closer look in a minute.

When I did the implementation, I swapped in the new node type and made sure its Equatable and Hashable conformances were accurate. At the time tests passed and everything I tried it on worked. But you have now found two places where the calling code bypassed both and operated on an assumption that packages are unique. Neither code branch appears to be confined to pre‐5.2 packages, so I suspect switching to .everything will only patch the immediate package and not the general problem.

I plan on re‐auditing PubGrub’s actual use of the nodes right away.

@neonichu (Contributor Author)

Neither code branch appears to be confined to pre‐5.2 packages, so I suspect switching to .everything will only patch the immediate package and not the general problem.

Yep, that's true. What I was thinking was that in the case of 5.2 packages, we should end up with only one DependencyResolutionNode per package here, since we know the exact product, but I guess that is not actually correct because there could be multiple products required from the same package.

@SDGGiesbrecht (Contributor) commented Dec 18, 2020

I did a project‐wide search for “DependencyResolutionNode” and inspected each function that referenced it in its signature. I made the following adjustments and just launched the test suite locally. Maybe you can try them on the actual problematic package while I wait for the result?

diff --git a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
index 42e393b0..8e6ac471 100644
--- a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
+++ b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
@@ -653,7 +653,12 @@ public struct PubgrubDependencyResolver {
             do {
                 let counts = try result.get()
                 // forced unwraps safe since we are testing for count and errors above
-                let pkgTerm = undecided.min { counts[$0]! < counts[$1]! }!
+                let pkgTerm = undecided.min {
+                  // Only the count is relevant to short‐circuiting, but the others ensure determinism.
+                  // (If SwiftPM doesn’t always pick the same graph branch, the diagnostics might change with each invocation.)
+                  (counts[$0]!, $0.node.package.identity, $0.node.specificProduct ?? "")
+                    < (counts[$1]!, $1.node.package.identity, $1.node.specificProduct ?? "")
+                }!
                 // at this point the container is cached 
                 let container = try self.provider.getCachedContainer(for: pkgTerm.node.package)
                 // Get the best available version for this package.
@@ -1071,10 +1076,10 @@ private final class PubGrubPackageContainer {
 
     /// The map of dependencies to version set that indicates the versions that have had their
     /// incompatibilities emitted.
-    private var emittedIncompatibilities = ThreadSafeKeyValueStore<PackageReference, VersionSetSpecifier>()
+    private var emittedIncompatibilities = ThreadSafeKeyValueStore<DependencyResolutionNode, VersionSetSpecifier>()
 
     /// Whether we've emitted the incompatibilities for the pinned versions.
-    private var emittedPinnedVersionIncompatibilities = ThreadSafeBox(false)
+    private var emittedPinnedVersionIncompatibilities = ThreadSafeKeyValueStore<ProductFilter, Bool>()
 
     init(underlying: PackageContainer, pinsMap: PinsStore.PinsMap, queue: DispatchQueue) {
         self.underlying = underlying
@@ -1221,20 +1226,22 @@ private final class PubGrubPackageContainer {
 
             // Skip if we already emitted incompatibilities for this dependency such that the selected
             // falls within the previously computed bounds.
-            if emittedIncompatibilities[dep.identifier]?.contains(version) != true {
+            for node in dep.nodes() {
+            if emittedIncompatibilities[node]?.contains(version) != true {
                 constraints.append(dep)
             }
+            }
         }
 
         // Emit the dependencies at the pinned version if we haven't emitted anything else yet.
         if version == pinnedVersion, emittedIncompatibilities.isEmpty {
             // We don't need to emit anything if we already emitted the incompatibilities at the
             // pinned version.
-            if self.emittedPinnedVersionIncompatibilities.get() ?? false {
+            if self.emittedPinnedVersionIncompatibilities.contains(node.productFilter) {
                 return []
             }
 
-            self.emittedPinnedVersionIncompatibilities.put(true)
+            self.emittedPinnedVersionIncompatibilities.memoize(node.productFilter, body: { true })
 
             // Since the pinned version is most likely to succeed, we don't compute bounds for its
             // incompatibilities.
@@ -1256,14 +1263,10 @@ private final class PubGrubPackageContainer {
         let (lowerBounds, upperBounds) = try self.computeBounds(for: node,
                                                                 constraints: constraints,
                                                                 startingWith: version,
-                                                                products: node.productFilter,
                                                                 timeout: computeBoundsTimeout)
 
         return try constraints.map { constraint in
             var terms: OrderedSet<Term> = []
-            let lowerBound = lowerBounds[constraint.identifier] ?? "0.0.0"
-            let upperBound = upperBounds[constraint.identifier] ?? Version(version.major + 1, 0, 0)
-            assert(lowerBound < upperBound)
 
             // We only have version-based requirements at this point.
             guard case .versionSet(let vs) = constraint.requirement else {
@@ -1271,12 +1274,16 @@ private final class PubGrubPackageContainer {
             }
 
             for constraintNode in constraint.nodes() {
+                let lowerBound = lowerBounds[node] ?? "0.0.0"
+                let upperBound = upperBounds[node] ?? Version(version.major + 1, 0, 0)
+                assert(lowerBound < upperBound)
+
                 let requirement: VersionSetSpecifier = .range(lowerBound ..< upperBound)
                 terms.append(Term(node, requirement))
                 terms.append(Term(not: constraintNode, vs))
 
                 // Make a record for this dependency so we don't have to recompute the bounds when the selected version falls within the bounds.
-                self.emittedIncompatibilities[constraint.identifier] = requirement.union(emittedIncompatibilities[constraint.identifier] ?? .empty)
+                self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)
             }
 
             return try Incompatibility(terms, root: root, cause: .dependency(node: node))
@@ -1293,9 +1300,8 @@ private final class PubGrubPackageContainer {
         for node: DependencyResolutionNode,
         constraints: [PackageContainerConstraint],
         startingWith firstVersion: Version,
-        products: ProductFilter,
         timeout: DispatchTimeInterval
-    ) throws -> (lowerBounds: [PackageReference: Version], upperBounds: [PackageReference: Version]) {
+    ) throws -> (lowerBounds: [DependencyResolutionNode: Version], upperBounds: [DependencyResolutionNode: Version]) {
         let preloadCount = 3
 
         // nothing to do
@@ -1308,7 +1314,7 @@ private final class PubGrubPackageContainer {
             for version in versions {
                 self.queue.async(group: sync) {
                     if self.underlying.isToolsVersionCompatible(at: version) {
-                        _ = try? self.underlying.getDependencies(at: version, productFilter: products)
+                        _ = try? self.underlying.getDependencies(at: version, productFilter: .nothing)
                     }
                 }
             }
@@ -1316,8 +1322,8 @@ private final class PubGrubPackageContainer {
             _ = sync.wait(timeout: .now() + timeout)
         }
         
-        func compute(_ versions: [Version], upperBound: Bool) -> [PackageReference: Version] {
-            var result: [PackageReference: Version] = [:]
+        func compute(_ versions: [Version], upperBound: Bool) -> [DependencyResolutionNode: Version] {
+            var result: [DependencyResolutionNode: Version] = [:]
             var previousVersion = firstVersion
 
             for (index, version) in versions.enumerated() {
@@ -1334,21 +1340,23 @@ private final class PubGrubPackageContainer {
                 let bound = upperBound ? version : previousVersion
                 
                 let isToolsVersionCompatible = self.underlying.isToolsVersionCompatible(at: version)
-                for constraint in constraints where !result.keys.contains(constraint.identifier) {
+                for constraint in constraints {
+                for node in constraint.nodes() where !result.keys.contains(node) {
                     // If we hit a version which doesn't have a compatible tools version then that's the boundary.
                     // Record the bound if the tools version isn't compatible at the current version.
                     if !isToolsVersionCompatible {
-                        result[constraint.identifier] = bound
+                        result[node] = bound
                     } else {
                         // Get the dependencies at this version.
-                        if let currentDependencies = try? self.underlying.getDependencies(at: version, productFilter: products) {
+                        if let currentDependencies = try? self.underlying.getDependencies(at: version, productFilter: node.productFilter) {
                             // Record this version as the bound for our list of dependencies, if appropriate.
                             if currentDependencies.first(where: { $0.identifier == constraint.identifier }) != constraint {
-                                result[constraint.identifier] = bound
+                                result[node] = bound
                             }
                         }
                     }
                 }
+                }
 
                 // We're done if we found bounds for all of our dependencies.
                 if result.count == constraints.count {
@@ -1369,12 +1377,12 @@ private final class PubGrubPackageContainer {
 
         let sync = DispatchGroup()
 
-        var upperBounds = [PackageReference: Version]()
+        var upperBounds = [DependencyResolutionNode: Version]()
         self.queue.async(group: sync) {
             upperBounds = compute(Array(versions.dropFirst(idx + 1)), upperBound: true)
         }
 
-        var lowerBounds = [PackageReference: Version]()
+        var lowerBounds = [DependencyResolutionNode: Version]()
         self.queue.async(group: sync) {
             lowerBounds = compute(Array(versions.dropLast(versions.count - idx).reversed()), upperBound: false)
         }

Oops. In the process of pasting that I noticed that this part...

for node in dep.nodes() {
if emittedIncompatibilities[node]?.contains(version) != true {
    constraints.append(dep)
}
}

...really ought to be...

if dep.nodes().contains(where: { emittedIncompatibilities[$0]?.contains(version) != true }) {
    constraints.append(dep)
}

@SDGGiesbrecht (Contributor)

Nope. No dice. I broke something in the process.

@SDGGiesbrecht (Contributor) commented Dec 18, 2020

Okay, take two passed the test suite locally (aside from two diagnostics whose order was reversed):

diff --git a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
index 42e393b0..8d6f7656 100644
--- a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
+++ b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
@@ -653,7 +653,12 @@ public struct PubgrubDependencyResolver {
             do {
                 let counts = try result.get()
                 // forced unwraps safe since we are testing for count and errors above
-                let pkgTerm = undecided.min { counts[$0]! < counts[$1]! }!
+                let pkgTerm = undecided.min {
+                  // Only the count is relevant to short‐circuiting, but the others ensure determinism.
+                  // (If SwiftPM doesn’t always pick the same graph branch, the diagnostics might change with each invocation.)
+                  (counts[$0]!, $0.node.package.identity, $0.node.specificProduct ?? "")
+                    < (counts[$1]!, $1.node.package.identity, $1.node.specificProduct ?? "")
+                }!
                 // at this point the container is cached 
                 let container = try self.provider.getCachedContainer(for: pkgTerm.node.package)
                 // Get the best available version for this package.
@@ -1071,10 +1076,10 @@ private final class PubGrubPackageContainer {
 
     /// The map of dependencies to version set that indicates the versions that have had their
     /// incompatibilities emitted.
-    private var emittedIncompatibilities = ThreadSafeKeyValueStore<PackageReference, VersionSetSpecifier>()
+    private var emittedIncompatibilities = ThreadSafeKeyValueStore<DependencyResolutionNode, VersionSetSpecifier>()
 
     /// Whether we've emitted the incompatibilities for the pinned versions.
-    private var emittedPinnedVersionIncompatibilities = ThreadSafeBox(false)
+    private var emittedPinnedVersionIncompatibilities = ThreadSafeKeyValueStore<ProductFilter, Bool>()
 
     init(underlying: PackageContainer, pinsMap: PinsStore.PinsMap, queue: DispatchQueue) {
         self.underlying = underlying
@@ -1221,7 +1226,7 @@ private final class PubGrubPackageContainer {
 
             // Skip if we already emitted incompatibilities for this dependency such that the selected
             // falls within the previously computed bounds.
-            if emittedIncompatibilities[dep.identifier]?.contains(version) != true {
+            if dep.nodes().contains(where: { emittedIncompatibilities[$0]?.contains(version) != true }) {
                 constraints.append(dep)
             }
         }
@@ -1230,11 +1235,11 @@ private final class PubGrubPackageContainer {
         if version == pinnedVersion, emittedIncompatibilities.isEmpty {
             // We don't need to emit anything if we already emitted the incompatibilities at the
             // pinned version.
-            if self.emittedPinnedVersionIncompatibilities.get() ?? false {
+            if self.emittedPinnedVersionIncompatibilities.contains(node.productFilter) {
                 return []
             }
 
-            self.emittedPinnedVersionIncompatibilities.put(true)
+            self.emittedPinnedVersionIncompatibilities.memoize(node.productFilter, body: { true })
 
             // Since the pinned version is most likely to succeed, we don't compute bounds for its
             // incompatibilities.
@@ -1276,7 +1281,7 @@ private final class PubGrubPackageContainer {
                 terms.append(Term(not: constraintNode, vs))
 
                 // Make a record for this dependency so we don't have to recompute the bounds when the selected version falls within the bounds.
-                self.emittedIncompatibilities[constraint.identifier] = requirement.union(emittedIncompatibilities[constraint.identifier] ?? .empty)
+                self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)
             }
 
             return try Incompatibility(terms, root: root, cause: .dependency(node: node))

@SDGGiesbrecht (Contributor)

My brain is really having trouble parsing this line:

if dep.nodes().contains(where: { emittedIncompatibilities[$0]?.contains(version) != true }) {

There are too many logical negatives in the original condition, and I am not confident whether the new one needs to be contains or allSatisfy.
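
For what it’s worth, a toy check suggests the two spellings agree (a Bool stands in for the emitted-incompatibilities lookup): the original per-package test meant “recompute unless every relevant node is already covered”, so we append if any node is not covered, and by De Morgan that contains(where:) is exactly the negation of an allSatisfy.

let covered: [String: Bool] = ["a": true, "b": false] // per-node "already emitted?"
let nodes = ["a", "b"]

// Append if at least one node is not yet covered:
let viaContains = nodes.contains { covered[$0] != true }      // true
// Equivalent by De Morgan: not all nodes are covered.
let viaAllSatisfy = !nodes.allSatisfy { covered[$0] == true } // true
assert(viaContains == viaAllSatisfy)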

@neonichu (Contributor Author)

This is very similar to what I came up with, but it doesn't really work for my reproducer if there's no pre-existing resolved file. It'll trigger the "Two products in one package resolved to different versions" assertion.

Could/should we possibly change DependencyResolutionNode to reference a set of products instead of a single one? I'm not entirely sure what the design considerations were for having one node per referenced product.

@SDGGiesbrecht (Contributor) commented Dec 19, 2020

It'll trigger the "Two products in one package resolved to different versions" assertion.

Are you speaking in terms of the first diff or the second one? For the first, that is the very error triggered by the test suite when I ran it. The second diff made it go away. (They are separate; don’t stack them.)

Could/should we possibly change DependencyResolutionNode to reference a set of products instead of a single one?

That had been my original idea long ago, but because of the set logic the resolver does on versions everywhere, it led to an explosion in complexity. (If a is the requirement of Package[ProductA, ProductB] at version 3..<8, what is ¬a and how do you represent it?)
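
Spelled out (illustrative notation): a term over a multi-product node is a conjunction, so its negation becomes a disjunction that no single node/version-set pair can represent:

a = (Foo[ProductA] ∈ 3..<8) ∧ (Foo[ProductB] ∈ 3..<8)
¬a = (Foo[ProductA] ∉ 3..<8) ∨ (Foo[ProductB] ∉ 3..<8)

With one node per product, every term and its negation stay expressible as a simple (node, version set) pair.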

@neonichu (Contributor Author)

I was only talking about the second diff; I didn't even look at the first one, since you had already posted the update. I'll dive a bit more into how the assertion is happening. It doesn't actually make a whole lot of sense to me in this particular case, since we are seemingly choosing a lower version of a dependency for no apparent reason. It might be related to how computeBounds works, which isn't entirely obvious to me yet.

In any case, I think I need to spend some time coming up with a few actual unit tests here; it doesn't really scale to figure this out with the large project I have.

@SDGGiesbrecht (Contributor) commented Dec 19, 2020

Starting with main and applying only the following diff, all tests pass except testCycle1 (which enters an infinite loop). That suggests to me that the emission memoization we are dealing with isn’t strictly necessary in order to resolve a valid package. Would you mind trying it like this on the problematic package? It would give us a definitive answer as to whether the root problem is related to these caches or not.

diff --git a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
index 42e393b0..292e48e7 100644
--- a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
+++ b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
@@ -1234,7 +1234,7 @@ private final class PubGrubPackageContainer {
                 return []
             }
 
-            self.emittedPinnedVersionIncompatibilities.put(true)
+            // self.emittedPinnedVersionIncompatibilities.put(true)
 
             // Since the pinned version is most likely to succeed, we don't compute bounds for its
             // incompatibilities.
@@ -1276,7 +1276,7 @@ private final class PubGrubPackageContainer {
                 terms.append(Term(not: constraintNode, vs))
 
                 // Make a record for this dependency so we don't have to recompute the bounds when the selected version falls within the bounds.
-                self.emittedIncompatibilities[constraint.identifier] = requirement.union(emittedIncompatibilities[constraint.identifier] ?? .empty)
+                //self.emittedIncompatibilities[constraint.identifier] = requirement.union(emittedIncompatibilities[constraint.identifier] ?? .empty)
             }
 
             return try Incompatibility(terms, root: root, cause: .dependency(node: node))

@SDGGiesbrecht (Contributor) commented Dec 19, 2020

"Two products in one package resolved to different versions"

This is probably just a variation of the same problem. Each product node has a dependency on its own package at the same version, which is how separate products are prohibited from being split across versions. But if excessively broad cache keys are re‐returning mismatched dependency lists, then this error would result whenever the dropped node happened to be the synthesized one that forces the entire package to use a single version.
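
A simplified standalone model of that invariant (illustrative, not SwiftPM’s actual check):

// Every product of a package must resolve to one common version; the
// synthesized self-dependency node is what enforces this while solving.
let chosen: [(package: String, product: String, version: String)] = [
    ("Foo", "ProductA", "1.2.0"),
    ("Foo", "ProductB", "1.2.0") // a different version here would trip the assertion
]
for (package, entries) in Dictionary(grouping: chosen, by: { $0.package }) {
    let versions = Set(entries.map { $0.version })
    assert(versions.count == 1,
           "Two products in one package resolved to different versions: \(package)")
}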

@SDGGiesbrecht (Contributor)

Alright, the following seems “most correct” to me; it adjusts computeBounds to operate on nodes and passes all tests locally.

It is almost identical to my very first diff posted in this thread, except that it renames one of the new loop variables so as not to inadvertently shadow an outer variable. That shadowing had originally resulted in a child node being passed to a function expecting a parent node. In turn, that derailed everything and led to the crash in my first round of testing that made me think I had horribly misunderstood something.

Let me know if it works on your top secret package.

diff --git a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
index 42e393b0..d017a36d 100644
--- a/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
+++ b/Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
@@ -653,7 +653,12 @@ public struct PubgrubDependencyResolver {
             do {
                 let counts = try result.get()
                 // forced unwraps safe since we are testing for count and errors above
-                let pkgTerm = undecided.min { counts[$0]! < counts[$1]! }!
+                let pkgTerm = undecided.min {
+                    // Only the count is relevant to short‐circuiting, but the others ensure determinism.
+                    // (If SwiftPM doesn’t always pick the same graph branch, the diagnostics might change with each invocation.)
+                    (counts[$0]!, $0.node.package.identity, $0.node.specificProduct ?? "")
+                        < (counts[$1]!, $1.node.package.identity, $1.node.specificProduct ?? "")
+                }!
                 // at this point the container is cached 
                 let container = try self.provider.getCachedContainer(for: pkgTerm.node.package)
                 // Get the best available version for this package.
@@ -1071,10 +1076,10 @@ private final class PubGrubPackageContainer {
 
     /// The map of dependencies to version set that indicates the versions that have had their
     /// incompatibilities emitted.
-    private var emittedIncompatibilities = ThreadSafeKeyValueStore<PackageReference, VersionSetSpecifier>()
+    private var emittedIncompatibilities = ThreadSafeKeyValueStore<DependencyResolutionNode, VersionSetSpecifier>()
 
     /// Whether we've emitted the incompatibilities for the pinned versions.
-    private var emittedPinnedVersionIncompatibilities = ThreadSafeBox(false)
+    private var emittedPinnedVersionIncompatibilities = ThreadSafeKeyValueStore<ProductFilter, Bool>()
 
     init(underlying: PackageContainer, pinsMap: PinsStore.PinsMap, queue: DispatchQueue) {
         self.underlying = underlying
@@ -1221,7 +1226,7 @@ private final class PubGrubPackageContainer {
 
             // Skip if we already emitted incompatibilities for this dependency such that the selected
             // falls within the previously computed bounds.
-            if emittedIncompatibilities[dep.identifier]?.contains(version) != true {
+            if dep.nodes().contains(where: { emittedIncompatibilities[$0]?.contains(version) != true }) {
                 constraints.append(dep)
             }
         }
@@ -1230,11 +1235,11 @@ private final class PubGrubPackageContainer {
         if version == pinnedVersion, emittedIncompatibilities.isEmpty {
             // We don't need to emit anything if we already emitted the incompatibilities at the
             // pinned version.
-            if self.emittedPinnedVersionIncompatibilities.get() ?? false {
+            if self.emittedPinnedVersionIncompatibilities.contains(node.productFilter) {
                 return []
             }
 
-            self.emittedPinnedVersionIncompatibilities.put(true)
+            self.emittedPinnedVersionIncompatibilities.memoize(node.productFilter, body: { true })
 
             // Since the pinned version is most likely to succeed, we don't compute bounds for its
             // incompatibilities.
@@ -1256,14 +1261,10 @@ private final class PubGrubPackageContainer {
         let (lowerBounds, upperBounds) = try self.computeBounds(for: node,
                                                                 constraints: constraints,
                                                                 startingWith: version,
-                                                                products: node.productFilter,
                                                                 timeout: computeBoundsTimeout)
 
         return try constraints.map { constraint in
             var terms: OrderedSet<Term> = []
-            let lowerBound = lowerBounds[constraint.identifier] ?? "0.0.0"
-            let upperBound = upperBounds[constraint.identifier] ?? Version(version.major + 1, 0, 0)
-            assert(lowerBound < upperBound)
 
             // We only have version-based requirements at this point.
             guard case .versionSet(let vs) = constraint.requirement else {
@@ -1271,12 +1272,16 @@ private final class PubGrubPackageContainer {
             }
 
             for constraintNode in constraint.nodes() {
+                let lowerBound = lowerBounds[constraintNode] ?? "0.0.0"
+                let upperBound = upperBounds[constraintNode] ?? Version(version.major + 1, 0, 0)
+                assert(lowerBound < upperBound)
+
                 let requirement: VersionSetSpecifier = .range(lowerBound ..< upperBound)
                 terms.append(Term(node, requirement))
                 terms.append(Term(not: constraintNode, vs))
 
                 // Make a record for this dependency so we don't have to recompute the bounds when the selected version falls within the bounds.
-                self.emittedIncompatibilities[constraint.identifier] = requirement.union(emittedIncompatibilities[constraint.identifier] ?? .empty)
+                self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)
             }
 
             return try Incompatibility(terms, root: root, cause: .dependency(node: node))
@@ -1293,9 +1298,8 @@ private final class PubGrubPackageContainer {
         for node: DependencyResolutionNode,
         constraints: [PackageContainerConstraint],
         startingWith firstVersion: Version,
-        products: ProductFilter,
         timeout: DispatchTimeInterval
-    ) throws -> (lowerBounds: [PackageReference: Version], upperBounds: [PackageReference: Version]) {
+    ) throws -> (lowerBounds: [DependencyResolutionNode: Version], upperBounds: [DependencyResolutionNode: Version]) {
         let preloadCount = 3
 
         // nothing to do
@@ -1308,7 +1312,7 @@ private final class PubGrubPackageContainer {
             for version in versions {
                 self.queue.async(group: sync) {
                     if self.underlying.isToolsVersionCompatible(at: version) {
-                        _ = try? self.underlying.getDependencies(at: version, productFilter: products)
+                        _ = try? self.underlying.getDependencies(at: version, productFilter: node.productFilter)
                     }
                 }
             }
@@ -1316,8 +1320,8 @@ private final class PubGrubPackageContainer {
             _ = sync.wait(timeout: .now() + timeout)
         }
         
-        func compute(_ versions: [Version], upperBound: Bool) -> [PackageReference: Version] {
-            var result: [PackageReference: Version] = [:]
+        func compute(_ versions: [Version], upperBound: Bool) -> [DependencyResolutionNode: Version] {
+            var result: [DependencyResolutionNode: Version] = [:]
             var previousVersion = firstVersion
 
             for (index, version) in versions.enumerated() {
@@ -1334,21 +1338,23 @@ private final class PubGrubPackageContainer {
                 let bound = upperBound ? version : previousVersion
                 
                 let isToolsVersionCompatible = self.underlying.isToolsVersionCompatible(at: version)
-                for constraint in constraints where !result.keys.contains(constraint.identifier) {
+                for constraint in constraints {
+                for childNode in constraint.nodes() where !result.keys.contains(node) {
                     // If we hit a version which doesn't have a compatible tools version then that's the boundary.
                     // Record the bound if the tools version isn't compatible at the current version.
                     if !isToolsVersionCompatible {
-                        result[constraint.identifier] = bound
+                        result[childNode] = bound
                     } else {
                         // Get the dependencies at this version.
-                        if let currentDependencies = try? self.underlying.getDependencies(at: version, productFilter: products) {
+                        if let currentDependencies = try? self.underlying.getDependencies(at: version, productFilter: node.productFilter) {
                             // Record this version as the bound for our list of dependencies, if appropriate.
                             if currentDependencies.first(where: { $0.identifier == constraint.identifier }) != constraint {
-                                result[constraint.identifier] = bound
+                                result[childNode] = bound
                             }
                         }
                     }
                 }
+                }
 
                 // We're done if we found bounds for all of our dependencies.
                 if result.count == constraints.count {
@@ -1369,12 +1375,12 @@ private final class PubGrubPackageContainer {
 
         let sync = DispatchGroup()
 
-        var upperBounds = [PackageReference: Version]()
+        var upperBounds = [DependencyResolutionNode: Version]()
         self.queue.async(group: sync) {
             upperBounds = compute(Array(versions.dropFirst(idx + 1)), upperBound: true)
         }
 
-        var lowerBounds = [PackageReference: Version]()
+        var lowerBounds = [DependencyResolutionNode: Version]()
         self.queue.async(group: sync) {
             lowerBounds = compute(Array(versions.dropLast(versions.count - idx).reversed()), upperBound: false)
         }

@neonichu (Contributor Author)

The new patch seems to exhibit the same behavior as the previous one for me. I'll dig in a bit more today.

@neonichu (Contributor Author) commented Jan 6, 2021

Closing this for now in favor of #3162

@neonichu closed this Jan 6, 2021