CI/TST: Making ci/run_tests.sh fail if one of the steps fail #24075

Merged — 3 commits merged into pandas-dev:master on Dec 4, 2018

Conversation

@datapythonista (Member) commented Dec 3, 2018

Looks like when simplifying how the tests are run in the CI (#23924), I missed the -e in the bash shebang. That makes ci/run_tests.sh exit with status code 0 even if the calls to pytest fail.

This left the CI green for the last 3 days, even when tests failed (sorry about that). I don't think anything is broken.

This PR fixes the problem.
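The effect of the missing flag can be sketched as follows (an illustrative snippet, not the contents of ci/run_tests.sh):

```shell
#!/bin/bash
# Without -e, a failing command does not stop the script, and the
# script's exit status is that of the LAST command, so a later
# successful step masks an earlier pytest failure.
bash -c 'false; echo "kept going"'          # prints "kept going", exits 0

# With -e ("#!/bin/bash -e" or "set -e"), bash stops at the first
# failing command and propagates its non-zero exit status.
bash -ec 'false; echo "kept going"' || echo "stopped with status $?"
```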

CC: @pandas-dev/pandas-core

@datapythonista datapythonista added Testing pandas testing functions or related to the test suite CI Continuous Integration labels Dec 3, 2018
@gfyoung (Member) commented Dec 3, 2018

Seems reasonable to me. Wondering, though, if we should double-check by adding a failing test, just to verify that the script now fails as expected?

@datapythonista (Member, Author)

I tested locally. I'd personally merge this asap, as the tests of all PRs are currently failing silently (unless you check the logs). And we'll quickly see whether this works as expected as soon as another PR's tests fail.

But I surely can break something here and test if you think it's worth the delay.

@gfyoung (Member) commented Dec 3, 2018

But I surely can break something here and test if you think it's worth the delay.

Fair enough. I think we all can be vigilant to notice if it doesn't work. 🙂

@gfyoung (Member) left a review comment

Let's merge on green.

@TomAugspurger (Contributor)

Any idea what the failure is from @datapythonista? https://travis-ci.org/pandas-dev/pandas/jobs/462954286#L1845

$ ci/run_tests.sh
============================= test session starts ==============================
platform linux2 -- Python 2.7.15, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
rootdir: /home/travis/build/pandas-dev/pandas, inifile: setup.cfg
plugins: xdist-1.24.1, forked-0.2, hypothesis-3.68.0
gw0 [0]
scheduling tests via LoadScheduling
 generated xml file: /home/travis/build/pandas-dev/pandas/test-data-single.xml -
======================== no tests ran in 84.95 seconds =========================
The command "ci/run_tests.sh" exited with 5.

Both of those failures are the slow / single tests, right?

@datapythonista (Member, Author)

Looks like when no tests are found with a -m pattern (the case for "single and slow"), the pytest process exits with status code 5. I'm looking into the best way to fix it.
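The eventual handling can be sketched like this (a minimal sketch with a hypothetical run_step helper, not the actual code pushed in this PR). pytest exits with 5 when it collects no tests and with 1 when tests fail, so the idea is to tolerate 5 for marker combinations that legitimately match nothing while still propagating real failures:

```shell
#!/bin/bash
# Sketch: run_step is a hypothetical wrapper around a pytest
# invocation. Exit code 5 ("no tests collected") is treated as
# success; any other non-zero status is propagated.
run_step() {
    "$@"
    local status=$?
    if [ "$status" -eq 5 ]; then
        echo "no tests collected, continuing"
        return 0
    fi
    return "$status"
}

run_step sh -c 'exit 5' && echo "step ok"        # tolerated
run_step sh -c 'exit 1' || echo "step failed"    # still fails
```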

@TomAugspurger (Contributor) commented Dec 3, 2018

I see. Do we ever want to run single and slow though? I would have thought that it'd be single or slow.

edit: no that's not quite right... sorry.

@TomAugspurger (Contributor)

So to run all the SLOW jobs, we need

-n 1 -m "slow and single"

and then

-n 2 -m "slow and not single"

And the problem is that -m "slow and single" is empty? As a workaround, can we add a dummy test with both those marks?
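Written into a script, the two passes above need the -m expression quoted so that "slow and single" reaches pytest as a single argument. A sketch with a hypothetical build_cmd helper (flag values illustrative, not verbatim from ci/run_tests.sh):

```shell
#!/bin/bash
# build_cmd is a hypothetical helper that assembles one pytest
# invocation; note the escaped quotes around the marker expression.
build_cmd() {
    local num_jobs="$1" marker="$2"
    echo "pytest -n $num_jobs -m \"$marker\" pandas"
}

build_cmd 1 "slow and single"       # the single-process pass
build_cmd 2 "slow and not single"   # the parallel (pytest-xdist) pass
```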

codecov bot commented Dec 3, 2018

Codecov Report

Merging #24075 into master will increase coverage by 49.86%.
The diff coverage is n/a.

Impacted file tree graph

@@             Coverage Diff             @@
##           master   #24075       +/-   ##
===========================================
+ Coverage   42.38%   92.25%   +49.86%     
===========================================
  Files         161      161               
  Lines       51701    51701               
===========================================
+ Hits        21914    47696    +25782     
+ Misses      29787     4005    -25782
Flag Coverage Δ
#multiple 90.65% <ø> (?)
#single 42.38% <ø> (ø) ⬆️
Impacted Files Coverage Δ
pandas/core/computation/pytables.py 92.37% <0%> (+0.3%) ⬆️
pandas/io/pytables.py 92.3% <0%> (+0.92%) ⬆️
pandas/util/_test_decorators.py 93.24% <0%> (+4.05%) ⬆️
pandas/compat/__init__.py 58.36% <0%> (+8.17%) ⬆️
pandas/core/config_init.py 99.24% <0%> (+9.84%) ⬆️
pandas/core/reshape/util.py 100% <0%> (+11.53%) ⬆️
pandas/compat/numpy/__init__.py 92.85% <0%> (+14.28%) ⬆️
pandas/core/computation/common.py 85.71% <0%> (+14.28%) ⬆️
pandas/core/api.py 100% <0%> (+14.81%) ⬆️
pandas/core/indexes/api.py 99% <0%> (+14.85%) ⬆️
... and 119 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 08395af...0a67310. Read the comment docs.

codecov bot commented Dec 3, 2018

Codecov Report

Merging #24075 into master will increase coverage by 49.7%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master   #24075      +/-   ##
==========================================
+ Coverage   42.38%   92.09%   +49.7%     
==========================================
  Files         161      161              
  Lines       51701    52102     +401     
==========================================
+ Hits        21914    47983   +26069     
+ Misses      29787     4119   -25668
Flag Coverage Δ
#multiple 90.66% <ø> (?)
#single 42.81% <ø> (+0.43%) ⬆️
Impacted Files Coverage Δ
pandas/core/computation/pytables.py 92.37% <0%> (+0.3%) ⬆️
pandas/io/pytables.py 92.3% <0%> (+0.92%) ⬆️
pandas/util/_test_decorators.py 93.24% <0%> (+4.05%) ⬆️
pandas/compat/__init__.py 58.36% <0%> (+8.17%) ⬆️
pandas/core/config_init.py 99.24% <0%> (+9.84%) ⬆️
pandas/core/reshape/util.py 100% <0%> (+11.53%) ⬆️
pandas/core/dtypes/dtypes.py 89.82% <0%> (+13.66%) ⬆️
pandas/compat/numpy/__init__.py 92.85% <0%> (+14.28%) ⬆️
pandas/core/computation/common.py 85.71% <0%> (+14.28%) ⬆️
pandas/core/api.py 100% <0%> (+14.81%) ⬆️
... and 119 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 08395af...07375c7. Read the comment docs.

@datapythonista (Member, Author)

That's correct @TomAugspurger; looks like it's a known pytest issue. The pytest devs propose the workaround I pushed. Not the nicest code, but it should work for now, and I'll try to find a cleaner approach for the script later (I don't like calling pytest twice and will try to avoid it).

@TomAugspurger (Contributor) commented Dec 3, 2018 via email

@datapythonista (Member, Author)

Can't really understand the errors in Travis. They are surely unrelated: they don't seem to happen in master, and I can't reproduce them locally. @TomAugspurger, have you seen this error before?

_____________________ TestTSPlot.test_is_error_nozeroindex _____________________
[gw0] linux -- Python 3.6.6 /home/travis/miniconda3/envs/pandas-dev/bin/python
self = <pandas.tests.plotting.test_datetimelike.TestTSPlot object at 0x7fca6c1875c0>
    def test_is_error_nozeroindex(self):
        # GH11858
        i = np.array([1, 2, 3])
        a = DataFrame(i, index=i)
>       _check_plot_works(a.plot, xerr=a)
pandas/tests/plotting/test_datetimelike.py:89: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
pandas/tests/plotting/test_datetimelike.py:1562: in _check_plot_works
    pickle.dump(fig, path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
m = <bound method AffineBase.transform of <matplotlib.transforms.BlendedAffine2D object at 0x7fca6c567240>>
    def _pickle_method(m):
        """Handle pickling issues with class instance."""
>       if m.im_self is None:
E       AttributeError: 'function' object has no attribute 'im_self'
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/statsmodels/graphics/functional.py:32: AttributeError

@h-vetinari (Contributor)

I've seen them locally sometimes, but never figured out why. Seems like a weird matplotlib dependency on something.

@datapythonista (Member, Author)

That's weird, not sure why matplotlib or statsmodels should fail randomly.

In any case, all green now. I'll let someone else review it before merging, as I made some changes since last reviews.

@jreback jreback added this to the 0.24.0 milestone Dec 4, 2018
pytest -m "$TYPE_PATTERN$PATTERN" -n $NUM_JOBS -s --strict --durations=10 --junitxml=test-data-$TYPE.xml $TEST_ARGS $COVERAGE pandas
PYTEST_CMD="pytest -m \"$TYPE_PATTERN$PATTERN\" -n $NUM_JOBS -s --strict --durations=10 --junitxml=test-data-$TYPE.xml $TEST_ARGS $COVERAGE pandas"
echo $PYTEST_CMD
# if no tests are found (the case of "single and slow"), pytest exits with code 5, and would make the script fail, if not for the below code
Contributor:

if no tests are found it is an error

Member Author:

in the builds where we filter by "slow", there are no tests matching when we execute the "single" call here. So without this, all our slow builds would fail.

Contributor:

ok, thanks. Even more explicit comments here would help then, but that can be done in another pass.

Member Author:

I'll try to simplify all this into a single call with the --dist option; if that works, neither this nor the loop will be needed, and things will be much simpler here.
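For reference, pytest-xdist's --dist option controls how tests are distributed across workers (modes include load, loadscope, loadfile and each). Whether any of them can replace the separate "single" pass is untested here; this is only a sketch of the direction, not what this PR ships:

```shell
#!/bin/bash
# Hypothetical single invocation (an assumption): --dist=loadfile
# keeps tests from the same file on the same worker, which might
# approximate the grouping the "single" marker provides today.
single_call='pytest -n 2 --dist=loadfile -m "slow" pandas'
echo "$single_call"
```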

Contributor:

sure

@jreback jreback merged commit 00f5aba into pandas-dev:master Dec 4, 2018
Pingviinituutti pushed a commit to Pingviinituutti/pandas that referenced this pull request Feb 28, 2019
…dev#24075)

* Making ci/run_tests.sh fail if one of the steps fail

* Fixing the pytest error when no tests are found for single and slow

* Fixing error when uploading the coverage