Coverage causes hangs #511
Comments
Hi Oren, I did a debug run with msbuild but found nothing different; the behavior on .NET Core is the same.
With
@jkotas do you think that aggressive inlining could help here (I think that tiering cannot help to remove the call)?
I think there may be other things going on too, as it's sometimes affecting the test itself:
Not every run, but more than once. If you run the tests several times, it'll eventually fail with that. Or this:
Those errors shouldn't be happening based on those code paths; it seems like Coverlet may be interfering with the normal code flow.
You mean more than once in the same CI loop (to understand whether it could be related to an instrumenting-more-than-once problem)?
I mean running it from the command line at different times. It's non-deterministic, and it only happens with Coverlet. It has never happened without it.
Can you try to "clean up" (git clean -fdx) between tests and check if it happens?
Will try. In the interim, even with the
Hmm... running twice without a clean does appear to trigger the errors. Seems like it's over-instrumenting or breaking something that way?
IIRC we had some issues where libs were instrumented more than once, leading to weird results.
This is a small enough method to be inlined by the JIT without any hints. I do not expect that aggressive inlining would help anything here.
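For readers following along, here is a minimal sketch of the kind of hint being discussed; HitRecorder and RecordHit are hypothetical names for illustration, not Coverlet's actual tracker:

```csharp
using System.Runtime.CompilerServices;

public static class HitRecorder
{
    // Hypothetical hit-counter in the spirit of a coverage tracker.
    // The attribute only asks the JIT to inline the call; a body this
    // small is usually inlined anyway, which is the point made above.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static void RecordHit(int[] hits, int index)
    {
        hits[index]++;
    }
}
```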
At the moment we don't check if a module is already instrumented (maybe we could try to check if the module tracker is present).
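A rough sketch of how such a check could look, assuming Mono.Cecil (which Coverlet uses for instrumentation); the marker type name pattern below is an assumption made for illustration, not the real injected type name:

```csharp
using System.Linq;
using Mono.Cecil;

public static class InstrumentationCheck
{
    // Returns true if the module already contains a type that looks like
    // an injected coverage tracker. The name pattern is illustrative only;
    // the actual name of the injected tracker type may differ.
    public static bool LooksAlreadyInstrumented(string assemblyPath)
    {
        using (ModuleDefinition module = ModuleDefinition.ReadModule(assemblyPath))
        {
            return module.Types.Any(t =>
                t.Name.Contains("Coverlet") && t.Name.Contains("Tracker"));
        }
    }
}
```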
Spoke too soon, even after a clean, I got an error:
Tell me where you put
I tried it on ECAlgorithmsTest.cs on the
Looks like it takes 8.2 minutes on my machine to complete (with SingleHit = true and no ExcludeFromCodeCoverage attributes set). Still a significant issue given that it takes 1.5 minutes without any coverage.
I'll try to understand what's going on, thanks for the quick info!
Perf isn't the only issue. I'm concerned about the random failures it injects. Locally, after 8.3 min, it passed on one platform but something failed on another. I started with a clean repo. The same run on Azure Pipelines happened to pass completely this time: https://dev.azure.com/onovotny/GitBuilds/_build/results?buildId=2207&view=results
I'll try to repro ASAP.
In CmsTestUtil, the property getters call Init() on the objects they create, so that by the time GenerateKeyPair() is called, the local
Oren, can you try to add
I'll try to reproduce and also compare instrumented vs. non-instrumented IL. It's the first time I've seen an issue like this; maybe instrumentation changes the execution order in a way that exposes some bug (a race, etc.).
@onovotny I ran a lot of tests on 2 different machines yesterday evening, 4-5 times per machine, and no luck... I cannot repro. My command was:
runsettings:
<?xml version="1.0" encoding="utf-8"?>
<!-- File name extension must be .runsettings -->
<RunSettings>
<DataCollectionRunSettings>
<DataCollectors>
<DataCollector friendlyName="XPlat code coverage">
<Configuration>
<Format>cobertura</Format>
<Exclude>[xunit.*]*,[*Tests]*,[nunit.*]*,[NUnit3.*]*</Exclude> <!-- [Assembly-Filter]Type-Filter -->
<ExcludeByAttribute>Obsolete,GeneratedCodeAttribute,CompilerGeneratedAttribute</ExcludeByAttribute>
<SingleHit>true</SingleHit>
</Configuration>
</DataCollector>
</DataCollectors>
</DataCollectionRunSettings>
</RunSettings>
Can you check that my config is the same as yours? This is the instrumentation of
I have one suspect for #511 (comment): for some reason the tests are running in parallel, and two tests ask for static access to https://github.com/onovotny/bc-csharp/blob/netstandard/crypto/test/src/cms/test/CMSTestUtil.cs#L84-L91. The first one initializes kpg, so the null check above returns a not-yet-initialized instance to the other thread, which could lead to a null ref. Maybe it happens because instrumented code is slower than non-instrumented code, or is no longer inlined, hence the random nature of the failure.
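To make the suspected race concrete, here is an illustrative sketch (type and member names are simplified placeholders, not copied from CMSTestUtil) of an unsynchronized lazy-init property like the one linked above, together with a Lazy<T> variant that avoids the race:

```csharp
using System;
using System.Threading;

// Placeholder for the generator object the tests share.
public class KeyGen
{
    private bool _initialized;
    public void Init() => _initialized = true;
    public object GenerateKeyPair() =>
        _initialized ? new object() : throw new InvalidOperationException("not initialized");
}

public static class KeyPairGeneratorCache
{
    private static KeyGen _kpg;

    // Unsynchronized lazy init: two threads can both see null, or a second
    // thread can observe the assignment before Init() has run and call
    // GenerateKeyPair() on a not-yet-initialized instance.
    public static KeyGen Racy
    {
        get
        {
            if (_kpg == null)
            {
                _kpg = new KeyGen();
                _kpg.Init();
            }
            return _kpg;
        }
    }

    // A thread-safe alternative: Lazy<T> runs the factory exactly once and
    // publishes only the fully initialized instance.
    private static readonly Lazy<KeyGen> _lazy = new Lazy<KeyGen>(() =>
    {
        var g = new KeyGen();
        g.Init();
        return g;
    }, LazyThreadSafetyMode.ExecutionAndPublication);

    public static KeyGen Safe => _lazy.Value;
}
```

If the real getters follow the unsynchronized pattern, instrumentation widening the window between publishing the instance and finishing Init() could explain why the failures only show up with coverage enabled, as speculated above.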
This issue is outdated (it predates Coverlet version 6.0.2) and is not related to the #1192 regression.
Hi,
When adding coverage for .NET Framework, it's causing hangs. It works fine on .NET Core:
https://dev.azure.com/onovotny/GitBuilds/_build/results?buildId=2203&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9
This commit has the changes:
novotnyllc/bc-csharp@5ddb175
Here's the command that fails on .NET Framework: novotnyllc/bc-csharp@5ddb175
Seems like it's happening with .NET Core too:
https://dev.azure.com/onovotny/GitBuilds/_build/results?buildId=2204&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9&t=bfbec40a-1b5e-5690-b870-859627cad0c0&l=20
Should be able to repro with
dotnet test -f netcoreapp2.1 ...
Repro steps:
Clone:
https://github.com/onovotny/bc-csharp
Checkout commit:
4a401ed22df13ef61b2bb7a306ee8a083167f1c6
Go to
crypto\test
Run
dotnet test -f netcoreapp2.1
and see that it passes. About 1.5 min on my machine.
Run
dotnet test -f netcoreapp2.1 -s ..\..\CodeCoverage.runsettings
Tests are taking much longer to run (never complete) and fail in some cases.
I think it's struggling on this test: https://github.com/onovotny/bc-csharp/blob/netstandard/crypto/test/src/math/ec/test/ECPointPerformanceTest.cs#L176
Perhaps with some of the random number generators in use there?