CI/TST: Making ci/run_tests.sh fail if one of the steps fail #24075
Conversation
Seems reasonable to me. I'm wondering, though, if we should double-check by adding a failing test, just to verify that this fails as expected?
I tested locally. I'd personally merge this asap, as the tests of all PRs are failing silently (unless you check the logs). And we'll see quickly whether this works as expected as soon as other PR tests fail. But I can surely break something here and test it if you think it's worth the delay.
Fair enough. I think we all can be vigilant to notice if it doesn't work. 🙂
Let's merge on green.
Any idea what the failure is from, @datapythonista? https://travis-ci.org/pandas-dev/pandas/jobs/462954286#L1845
Both of those failures are the slow / single tests, right?
Looks like when no tests are found with a given marker expression, pytest exits with a non-zero status code.
I see. Do we ever want to run …? edit: no, that's not quite right... sorry.
So to run all the SLOW jobs, we need …
and then …
And the problem is that …
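As a rough illustration of the problem being discussed (the marker names "slow" and "single" come from this thread; the exact patterns, flags and worker count are assumptions, not the real CI configuration):

    # Multi-process step of a hypothetical slow build: slow tests that can run in parallel.
    pytest -m "not single and slow" -n 4 pandas

    # Single-process step of the same build: no test matches "single and slow",
    # so pytest collects nothing and exits with status code 5 ("no tests ran"),
    # which a script running under "set -e" treats as a failure.
    pytest -m "single and slow" pandas
    echo "exit status: $?"   # prints 5 when nothing was collected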
Codecov Report

@@            Coverage Diff             @@
##           master    #24075       +/-   ##
===========================================
+ Coverage   42.38%    92.25%    +49.86%
===========================================
  Files         161       161
  Lines       51701     51701
===========================================
+ Hits        21914     47696    +25782
+ Misses      29787      4005    -25782

Continue to review full report at Codecov.
Codecov Report

@@            Coverage Diff             @@
##           master    #24075       +/-   ##
==========================================
+ Coverage   42.38%    92.09%     +49.7%
==========================================
  Files         161       161
  Lines       51701     52102      +401
==========================================
+ Hits        21914     47983    +26069
+ Misses      29787      4119    -25668

Continue to review full report at Codecov.
That's correct @TomAugspurger, looks like it's a common pytest issue. The pytest devs propose the workaround I pushed. Not the nicest code, but this should work for now, and I'll try to find a cleaner approach for that script later (I don't like calling pytest twice, will try to avoid that).
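A minimal sketch of that kind of workaround (not necessarily the exact code pushed in this PR): run the command with abort-on-error temporarily suspended, treat pytest's exit code 5 ("no tests collected") as success, and propagate any other failure. PYTEST_CMD is assumed to hold the full pytest invocation built earlier in the script.

    # Sketch only: tolerate "no tests collected" (exit code 5), fail on anything else.
    set +e                     # temporarily disable abort-on-error
    sh -c "$PYTEST_CMD"        # $PYTEST_CMD is assumed to be built earlier in the script
    RET=$?
    set -e                     # restore abort-on-error

    if [ "$RET" -ne 0 ] && [ "$RET" -ne 5 ]; then
        exit "$RET"
    fi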
LGTM.
Can't really understand the errors in Travis. They are surely unrelated: they don't seem to happen in master, and I can't reproduce them locally. @TomAugspurger, have you seen this error before?
I've seen them locally sometimes, but never figured out why. Seems like a weird matplotlib dependency on something.
That's weird, not sure why matplotlib or statsmodels should fail randomly. In any case, all green now. I'll let someone else review it before merging, as I made some changes since the last reviews.
pytest -m "$TYPE_PATTERN$PATTERN" -n $NUM_JOBS -s --strict --durations=10 --junitxml=test-data-$TYPE.xml $TEST_ARGS $COVERAGE pandas
PYTEST_CMD="pytest -m \"$TYPE_PATTERN$PATTERN\" -n $NUM_JOBS -s --strict --durations=10 --junitxml=test-data-$TYPE.xml $TEST_ARGS $COVERAGE pandas"
echo $PYTEST_CMD
# if no tests are found (the case of "single and slow"), pytest exits with code 5, and would make the script fail, if not for the below code
if no tests are found, it is an error
we don't have any tests matching when we filter by "slow" and then execute the "single" call here, so all our slow builds would fail.
ok thanks
even more explicit comments here would help then, but that can be done in another pass
I'll try to simplify all this to a single call with the --dist option, and if that works, neither this nor the loop will be needed, and things will be very simple here.
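If that pans out, a single call could look roughly like the sketch below. This is only an assumption about the direction, not the final implementation: --dist=loadfile is pytest-xdist's per-file distribution mode and just one possible choice; the variable names follow the diff above.

    # Hypothetical single-call version: let pytest-xdist distribute all tests
    # in one run instead of looping over separate "single" and multi-process steps.
    # --dist=loadfile keeps every test from a given file on the same worker.
    pytest -m "$PATTERN" -n "$NUM_JOBS" --dist=loadfile \
        -s --strict --durations=10 \
        --junitxml=test-data.xml $TEST_ARGS $COVERAGE pandas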
sure
…dev#24075)
* Making ci/run_tests.sh fail if one of the steps fail
* Fixing the pytest error when no tests are found for single and slow
* Fixing error when uploading the coverage
Looks like when simplifying the running of the tests in the CI (#23924), I missed the -e in the bash header. That makes ci/run_tests.sh exit with status code 0, even if the calls to pytest fail. This left the CI green even when tests failed for the last 3 days (sorry about that). I think nothing is broken.
This PR fixes the problem.
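For context, a small sketch of the bash behaviour being described (illustrative only, not taken from the repository):

    #!/bin/bash
    # Without "-e" in the header (or "set -e"), a failing command does not stop
    # the script, and the script's exit status is that of its *last* command.
    false                                   # stand-in for a failing pytest call
    echo "still running; script exits 0"    # so the CI sees a green build

    # With "#!/bin/bash -e" (or "set -e"), the script aborts at the first
    # failing command and propagates its non-zero exit status to the CI.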
CC: @pandas-dev/pandas-core