Feedback about Test Analytics from Codecov #637
17 comments · 22 replies
-
Thanks for the new thread :)
TBH I don't much like automated PR comments. I'd prefer to have that information in a job summary instead. For the coverage reports, one can already disable the comments.
-
Bringing back my comments that also got lost in the massive thread: in summary, I like the feature overall, and it was easy to set up. However, I would prefer the posted PR comment that includes flaky test details to also preserve the original test results summary, since both coverage results and flaky test information are relevant at the same time. Currently, if a flaky test does happen to result in a failure, the coverage of all other tests is overridden in the comment, which forces me to go into the Codecov UI and dig the information back out. Some tests might be temporarily accepted even though they are flaky or failing, while we are still interested in coverage of other aspects of the code. For now, I have had to disable the flaky test analysis, because it hinders our development of other features while known flaky tests are occurring, and it masks coverage results in PR comments.
-
Hello, our main use case is finding and fixing flaky tests. We switched from Datadog CI because it was too expensive and we didn't need most of their features. It's working OK for us with Codecov, but we need a view across all branches to be able to prioritize which tests to fix. We have dozens of open PRs where CI is running, and looking only at our main branch for flakiness doesn't give us much data to work with (there are many more CI runs on feature branches than on the main branch). Happy to discuss it in more detail if needed.
-
Hi! It would be great if there was an option on the regular Codecov action to enable test analytics, since most people are already using it. There is no need to add another action just to upload a single file. Thanks!
-
I would like to dynamically pull data about test flakiness on our main branch so I can skip or ignore failures from tests with high failure rates. The test results API is supposed to return this data, but it lacks some features that would be critical for this use case, such as filtering by flag and filtering by a minimum failure rate, and in my testing the response doesn't even include the failure rate, even though that's listed as a required output in the docs. Also, the docs say that we can use the test results API to use custom reports, but they just link to the two test results GET requests, which doesn't explain how to upload custom reports. I'm also curious whether we can use an access token to call the GraphQL endpoints that are called when viewing the dashboards. Those seem to return all the data I need, with options for sorting and filtering by flag, so that would work relatively well for my use case.
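For context, here is roughly the kind of conftest.py I want to be able to write. This is only a minimal sketch: the endpoint URL, query parameter, response fields, and threshold below are placeholders/assumptions for illustration, not the documented API.

# conftest.py -- minimal sketch of the "skip known-flaky tests" idea described above.
# The endpoint URL, query parameter, and response fields are assumptions for illustration;
# check the Codecov API docs for the real test results endpoint and schema.
import os

import pytest
import requests

CODECOV_API_TOKEN = os.environ.get("CODECOV_API_TOKEN", "")
# Hypothetical endpoint -- replace with the actual test results endpoint for your repo.
TEST_RESULTS_URL = "https://api.codecov.io/api/v2/github/OWNER/repos/REPO/test-results/"
FAILURE_RATE_THRESHOLD = 0.3  # arbitrary example threshold


def fetch_flaky_test_names():
    """Return names of tests on main whose reported failure rate exceeds the threshold."""
    resp = requests.get(
        TEST_RESULTS_URL,
        headers={"Authorization": f"Bearer {CODECOV_API_TOKEN}"},
        params={"branch": "main"},  # assumed query parameter
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])  # assumed response shape
    return {
        item["name"]
        for item in results
        if item.get("failure_rate", 0) > FAILURE_RATE_THRESHOLD
    }


def pytest_collection_modifyitems(config, items):
    """Mark known-flaky tests as xfail so their failures don't block the build."""
    try:
        flaky = fetch_flaky_test_names()
    except requests.RequestException:
        return  # if the API is unreachable, run the suite normally
    for item in items:
        if item.name in flaky:
            item.add_marker(pytest.mark.xfail(reason="known flaky on main", strict=False))

Server-side filtering by flag and by failure rate would let me drop most of the client-side filtering in this sketch.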
-
My JUnit test files are generated using pytest. The structure of these files is slightly different compared to the example in the Codecov docs:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite errors="0" failures="0" hostname="4e5c6d2bc9e0" name="pytest" skipped="0" tests="30"
time="1.239" timestamp="2025-03-23T21:24:29.007588">
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.0-1-True]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78-01-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.0-01-True]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.0.0-01-True]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.1-02-True]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[invalid-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.0.1-01-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.0-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[123-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[None-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[3.78.0-01-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[version12-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[version13-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="20"
name="test_is_valid_version[True-False]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="232"
name="test_get_possible_package_names[3.78.0-01-None-None-expected0]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="232"
name="test_get_possible_package_names[3.78.0-01-aarch64-None-expected1]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download.TestNexusDownloadModule"
file="tests/unit/plugins/modules/test_download.py" line="165"
name="test_main_check_mode" time="0.007" />
<testcase classname="tests.unit.plugins.modules.test_download.TestNexusDownloadModule"
file="tests/unit/plugins/modules/test_download.py" line="68"
name="test_get_latest_version" time="0.013" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="232"
name="test_get_possible_package_names[3.78.0-01-aarch64-java11-expected3]" time="0.005" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="318"
name="test_get_valid_download_urls" time="0.006" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="284"
name="test_validate_download_url" time="0.005" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="232"
name="test_get_possible_package_names[3.78.0-01-None-java11-expected2]" time="0.004" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="665" name="test_get_dest_path"
time="0.003" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="350" name="test_main"
time="0.008" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="586" name="test_download_file"
time="0.010" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="674" name="test_url_resolution"
time="0.006" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="707" name="test_error_handling"
time="0.005" />
<testcase classname="tests.unit.plugins.modules.test_download"
file="tests/unit/plugins/modules/test_download.py" line="742"
name="test_get_download_url" time="0.005" />
</testsuite>
</testsuites>
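For reference, this is the throwaway script I use to see which attributes my files actually carry, to compare against the docs example (junit.xml is a placeholder filename).

# inspect_junit.py -- list the attributes present in a pytest-generated JUnit XML file,
# to compare its structure against the example in the Codecov docs.
# "junit.xml" is a placeholder filename.
import xml.etree.ElementTree as ET

root = ET.parse("junit.xml").getroot()
for suite in root.iter("testsuite"):
    print("testsuite attributes:", sorted(suite.attrib))
    cases = suite.findall("testcase")
    if cases:
        print("testcase attributes:", sorted(cases[0].attrib))
        print("testcase count:", len(cases))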
-
First, thank you for this interesting feature. Maybe this question has already been raised/answered in the original (massive) thread, but I could not find the answer. When running the same test suite multiple times for various …
-
I just set this up on our repository. Configuration was easy, but one problem I'm facing is that reports are not merged in the bot comment. We operate a monorepo, and if a developer modifies more than one of our apps there may be multiple test failures reported from separate jobs on the same pull request. The Codecov bot comment is updated with whichever report was uploaded last, hiding the results of all prior reports. Edit: my mistake, the reports are merged. My test PR had identically named files and test names for testing out the integration. Giving the tests unique names now shows the merged reports in the comment.
-
The feature is nice! One thing I noticed is that it doesn't strip out ANSI codes from test output, so I see stuff like #x1B[1m#x1B[31mtests/test_preprocessing.py#x1B[0m:445:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
#x1B[1m#x1B[.../hostedtoolcache/Python/3.13.2....../x64/lib/python3.13.../site-packages/legacy_api_wrap/__init__.py#x1B[0m:82: in fn_compatible
return fn(*args_all, **kw)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
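Until the output is cleaned up server-side, my workaround is to strip the escape sequences from the JUnit XML before uploading. A minimal sketch (junit.xml is a placeholder filename, and the exact encoding of the escapes may differ from what is assumed here):

# strip_ansi.py -- remove ANSI color escape sequences from a JUnit XML report
# before uploading it. Handles both the raw ESC character and the textual
# forms shown above; adjust the pattern if your report encodes them differently.
import re
import sys

ANSI_ESCAPE = re.compile(r"(?:\x1b|&#x1B;|#x1B)\[[0-9;]*m")

path = sys.argv[1] if len(sys.argv) > 1 else "junit.xml"
with open(path, encoding="utf-8") as f:
    text = f.read()
with open(path, "w", encoding="utf-8") as f:
    f.write(ANSI_ESCAPE.sub("", text))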
-
Just tried this out for a .NET project, and noticed that quotes in data-driven test names (e.g. for strings) are rendered escaped in the UI, rather than as the literal quotes I would expect to see.
-
Hi, can I disable the stack trace logging in GitHub comments? Unfortunately, one of our libs gets inspected when it crashes and leaks an API token, and GitHub's security scanning is complaining about it.
-
Hello, is it possible to generate the requested JUnit format for .NET tests?
-
Please support GoogleTest's XML format. GoogleTest does not support JUnit output, but it is widely used in native projects.
-
I am able to successfully upload my Playwright JUnit report from my workflow YAML, but nothing is reflected in the Test Analytics tab. How do I debug this?
-
My team just set this up in a private repo, and our Jest and pytest JUnit reports are definitely uploading correctly, because we can see them if we change branch contexts, but we are unable to see any data in the dashboard for the default context/main branch (…
-
Thanks for dropping by!
We've recently released a whole new feature around Test Analytics and flaky test detection.
We'd love to hear feedback on:
How your setup experience was
How easy/useful the PR comment is
How actionable and accurate our flake reporting is
How useful our Tests Dashboards are in finding and fixing problematic tests in your test suite
This issue is intended to share and collect feedback about the tool. If you have support needs or questions, please let us know!