
Prefer modern UseDotNet task over obsolete DotNetCoreInstaller #427


Merged
merged 1 commit into from
May 27, 2019

Conversation

AArnott (Contributor) commented May 27, 2019

Prefer modern UseDotNet task over obsolete DotNetCoreInstaller

The `UseDotNet` task is newer and follows the dotnet SDK team's best practices
better than the older `DotNetCoreInstaller` task.

One significant difference between the two is that `UseDotNet` sets
`DOTNET_MULTILEVEL_LOOKUP=0`, such that once you use this task, you have to
manually install *all* SDK/runtime versions that your pipeline requires.
The task *may* be used more than once in order to accomplish this.
This is actually a *good* thing, because it means your pipeline is more fully
self-describing, and less dependent on whatever versions the agents happen to
have installed at the time.

I do wonder what @MarcoRossignoli meant when he said in #425:

this update does not conflicts with #396 because there we use DOTNET_MULTILEVEL_LOOKUP=0

Since here we do set that, what does that mean for #396?
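To make this concrete, a pipeline can invoke the task once per SDK it needs; a minimal sketch (the version numbers are illustrative, not taken from this repo):

```yaml
# Illustrative azure-pipelines.yml fragment: install every SDK the pipeline needs.
steps:
- task: UseDotNet@2
  inputs:
    packageType: sdk
    version: 2.2.300   # SDK used for the build
- task: UseDotNet@2
  inputs:
    packageType: sdk
    version: 2.1.700   # an additional SDK the tests require
- script: dotnet test --configuration Release
```

Because `DOTNET_MULTILEVEL_LOOKUP=0` is in effect after the first `UseDotNet` step, only versions installed by these steps are visible to `dotnet` for the rest of the job.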

MarcoRossignoli (Collaborator) commented May 27, 2019

I do wonder what @MarcoRossignoli meant when he said in #425:

Sorry @AArnott, I'll try to explain better; I wasn't clear enough (my English is not so good, sorry again).
Here we have two scenarios to handle:

  1. AZP CI: for this I'm fine using UseDotNet and YAML pipelines because, as you said, everything will be fully self-describing and less dependent on whatever versions the agents happen to have installed at the time.
    I think we could also use this approach if we want to run tests against more than one SDK. If I'm not mistaken, it's enough to add a list of UseDotNet steps followed by a dotnet test command, right?
    On this part I have a question: do we need to use the fully qualified path installationPath: $(Agent.ToolsDirectory)/dotnet to be sure we use the correct version? https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/dotnet-core-tool-installer?view=azure-devops
  2. Allow new contributors (and also ourselves) to clone the repo, run a simple build.cmd/./build.sh, and let the build workflow take care of everything.
    For instance, we now pin global.json, so if someone does not install the right SDK they will get a build error.
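The pinning mentioned here is just a global.json at the repository root; a minimal example (the version number is illustrative):

```json
{
  "sdk": {
    "version": "2.2.300"
  }
}
```

With this file in place, the dotnet CLI resolves the SDK against the pinned version and fails with a clear error if a matching SDK is not installed, which is what produces the early build error described above.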

My idea of having a "more complex and less self-describing" CI workflow (a simple batch run from YAML) was to avoid handling two different scenarios and to have consistent behaviour between CI and local runs; this way we also have less "infra metadata" (script + YAML) to maintain.

this update does not conflicts with #396 because there we use DOTNET_MULTILEVEL_LOOKUP=0

I mean that we can set up a "mixed CI" with these steps:

  1. Install pinned SDK
  2. Build with pinned SDK
  3. Run tests with multiple SDKs using the custom build.cmd/./build.sh, so we have a consistent approach between CI and local runs; it works inside CI because we use the fully qualified $dotnetcli
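In YAML, the "mixed CI" above might look roughly like this sketch (the `useGlobalJson` input is a real `UseDotNet` option; the script names follow the repo convention, the rest is assumed):

```yaml
# Illustrative sketch of the "mixed CI" steps described above.
steps:
- task: UseDotNet@2                              # 1. install the pinned SDK
  inputs:
    packageType: sdk
    useGlobalJson: true                          # honor the repo's pinned global.json
- script: dotnet build --configuration Release   # 2. build with the pinned SDK
- script: ./build.sh                             # 3. run tests across SDKs via the custom script
```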

My idea is that in the future, when we make internal changes to instrumentation/tests, we develop against the pinned SDK, but before opening a PR we can check whether the changes work on all expected SDKs simply by running the "local build" build.cmd/./build.sh, and again advertise build.cmd/./build.sh in a "Build repo" contributing guide (if we end up with more than one SDK, with a simple script update we could pass --skipmultiplesdktests to use only the pinned one).

MarcoRossignoli (Collaborator) left a comment

LGTM

@MarcoRossignoli MarcoRossignoli requested a review from tonerdo May 27, 2019 07:59
MarcoRossignoli (Collaborator) commented May 27, 2019

@AArnott, to "test with different SDKs" another idea could be to add a script under eng:

eng\testsSDKs.ps1 -configuration Release/Debug -sdks 2.2.300 2.2.100
eng/testsSDKs.sh -configuration Release/Debug -sdks 2.2.300 2.2.100

and use it locally and, if needed, at the end of the normal pipelines or as an "outerloop" leg.
Do you have another idea that avoids a script but provides the same feature, coherent between local and CI environments?
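Such a script might look roughly like this bash sketch (the script name, arguments, and the DOTNET override are assumptions for illustration, not an existing script in this repo):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of eng/testsSDKs.sh: for each requested SDK version,
# pin it via global.json and run the test suite with that SDK.
set -euo pipefail

run_tests_for_sdks() {
  # Allow substituting the dotnet CLI (e.g. DOTNET=echo for a dry run).
  local dotnet_cli="${DOTNET:-dotnet}"
  local configuration="$1"; shift
  local sdk
  for sdk in "$@"; do
    # Writing global.json makes the dotnet CLI resolve exactly this SDK version.
    printf '{ "sdk": { "version": "%s" } }\n' "$sdk" > global.json
    echo "testing with SDK $sdk (configuration: $configuration)"
    "$dotnet_cli" test --configuration "$configuration"
  done
}
```

For example, `run_tests_for_sdks Release 2.2.300 2.2.100` would pin and test each SDK in turn, matching the proposed `-configuration`/`-sdks` interface.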

@tonerdo tonerdo merged commit b8abd2b into coverlet-coverage:master May 27, 2019
AArnott (Contributor, Author) commented May 27, 2019

If I'm not mistaken, it's enough to add a list of UseDotNet steps followed by a dotnet test command, right?

Correct.

On this part I have a question: do we need to use the fully qualified path installationPath: $(Agent.ToolsDirectory)/dotnet to be sure we use the correct version?

No. Just use UseDotNet for all the runtimes/SDKs you want to test with. As for how to apply the desired SDK/runtime version to a given test, that depends on what you need, and we can discuss it in a dedicated issue on the subject.

For instance, we now pin global.json, so if someone does not install the right SDK they will get a build error.

True. On the other hand, they will never hit the much more bizarre and unpredictable set of errors that can occur when they use an SDK version you don't support and/or have never tried. This "pinning" may cost a contributor one extra step (installing the SDK), but after that the build will always work the way you expect. It also means that if you go back and service some particular version later on, it will build exactly as it did when you originally built it. IMO that's a net win.

run a simple build.cmd/./build.sh and let the build workflow take care of everything.

Yes, that sounds like a good goal.

My idea of having a "more complex and less self-describing" CI workflow (a simple batch run from YAML) was to avoid handling two different scenarios and to have consistent behaviour between CI and local runs; this way we also have less "infra metadata" (script + YAML) to maintain.

I've tried to strike a balance between a step-by-step AZP pipeline that is informative to look at, tracks test runs, identifies unstable tests, etc., and a reliable, low-maintenance way to reproduce all the validations locally. So I respect that goal too and am happy to help you reach it (either with guidance or PRs). With this and my other PRs I think we're moving closer to what works well. More on that in a separate issue if you've got one or want to file one on the topic.

And use it locally and, if needed, at the end of the normal pipelines or as an "outerloop" leg.
Do you have another idea that avoids a script but provides the same feature, coherent between local and CI environments?

I think that's a reasonable proposal. I'm still doing a bit of guessing as to what you'll be doing in that script, so I'd like to learn more (in a dedicated issue) about what you need before I can say for certain and/or offer refinements.

@AArnott AArnott deleted the PinSdkVersion branch May 27, 2019 13:50
MarcoRossignoli (Collaborator) commented:

@AArnott thanks for the answers. I'll open a separate issue once some of the open PRs are merged (we have to merge some code PRs on instrumentation tests before trying multiple-SDK tests, so we can postpone for now).
