pytest-warnings fails tests since 2.8 #1045


Closed
jonapich opened this issue Sep 21, 2015 · 10 comments

@jonapich

Something changed in 2.8 that makes all my test runs fail with "2 pytest-warnings". I had to use a simplefilter to even be able to view what the warnings were about.

One of the failures is a simple DeprecationWarning that I can live with for the time being. It's not clear how we're supposed to disable this functionality - I tried all possible warnings.simplefilter values, and the only useful one seems to be "error" (that's how I could see what the warning was all about).

Another warning is now seen as a pytest-warning that fails my test. It's an ImportWarning because of a duplicate folder name (one with an __init__.py, one without). Python correctly resolves the one with __init__.py.

I'll be pinning 2.7.3 until this is fixed.

@nicoddemus
Member

Something changed in 2.8 that makes all my test runs fail with "2 pytest-warnings"

You mean the test suite is failing (returning != 0 to the system)? This shouldn't happen and is certainly a bug if it is.

pytest-warnings are warnings emitted by py.test to flag issues related to the test suite itself, not your code (for example, pytest won't collect a class whose name starts with Test if it has an __init__ method). They have nothing to do with the warnings module, and this was confusing to users. They were previously reported as warnings at the end of the test suite, but in #976 they were renamed to pytest-warnings precisely to avoid this confusion.
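A minimal sketch of the kind of construct that triggers one of these pytest-warnings (class names and bodies are hypothetical, for illustration only):

```python
# Hypothetical test module: pytest emits a pytest-warning (not a Python
# warnings-module warning) for a Test-named class with an __init__, and
# skips collecting it.

class TestWithInit:
    """Looks like a test class, but pytest will NOT collect it."""

    def __init__(self):
        self.value = 1

    def test_value(self):
        assert self.value == 1


class TestPlain:
    """No __init__, so pytest collects this class normally."""

    def test_always_passes(self):
        assert True
```

Running py.test against such a module collects only TestPlain and reports a pytest-warning about TestWithInit in the summary line.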

One of the failures is a simple DeprecationWarning that I can live with for the time being. It's not clear how we're supposed to disable this functionality - I tried all possible warnings.simplefilter values, and the only useful one seems to be "error" (that's how I could see what the warning was all about).

I get the feeling that you are talking about the warnings module here... but there is currently no way to stop the 2 pytest-warnings line from appearing at the end of the summary. As I said, though, they should not be failing your test suite. You can get more information about them by passing -rx; they usually indicate a problem that should be fixed. Either way, could you post more information, please?

Another warning is now seen as a pytest-warning that fails my test. It's an ImportWarning because of a duplicate folder name (one with an __init__.py, one without). Python correctly resolves the one with __init__.py.

Can you give more information on this (traceback, output when running with -rx, etc.)? Thanks.

@jonapich
Author

My bad, I just checked the exit codes and they're fine (win/linux). My specific problem was actually caused by the exit code being 5 when no tests are found. Coincidentally, it was the first time I saw the pytest-warnings message, and I assumed it was failing the exit code check.

Adding the -rx switch doesn't show any additional information in the output, although pytest still mentions the 2 pytest-warnings at the end of the run...

E:\project>py.test -rx -kblade_servant
============================= test session starts =============================
platform win32 -- Python 2.7.9, pytest-2.8.0, py-1.4.30, pluggy-0.3.1
rootdir: E:\project, inifile: pytest.ini
plugins: timeout-0.5
collected 171 items

FrameworkTests\Sandbox\test_blade_servant.py .

== 170 tests deselected by "-kblade_servant -m 'not (cloud or interactive)'" ==
========= 1 passed, 170 deselected, 2 pytest-warnings in 0.36 seconds =========

E:\project>

@nicoddemus
Member

Oh sorry, I meant -rw (as in "warning"), my bad! 😅

@nicoddemus
Member

My specific problem was actually caused by the exit code being 5 when no tests are found.

Oh I see. There was a debate over this, but the vast majority of people voted for this behavior, as many felt they should get a failure if no tests were even collected (note that it is OK to collect tests but skip them all, though I got that last part wrong, sorry 😅).
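For reference, a sketch of pytest's exit codes around the 2.8 era; the numeric values are documented, but the descriptions here are paraphrased:

```python
# pytest exit codes (as documented around pytest 2.8):
PYTEST_EXIT_CODES = {
    0: "all collected tests passed (skips still count as success)",
    1: "some tests failed",
    2: "test execution was interrupted by the user",
    3: "internal error while executing tests",
    4: "pytest command line usage error",
    5: "no tests were collected",
}


def describe_exit(code):
    """Return a human-readable description for a pytest exit code."""
    return PYTEST_EXIT_CODES.get(code, "unknown exit code %d" % code)
```

Code 5 for "no tests collected" is the one introduced in the 2.8 series that this thread is about.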

@jonapich
Author

It's a nice behavior ;) Although, if it's returning anything other than 0, it should probably state that clearly at the end of the run?

Thanks for the -rw trick; I wish that was easier to google.

=========================== pytest-warning summary ============================
WI1 c:\python27\lib\site-packages\pytest_timeout.py:68 'pytest_runtest_protocol' hook uses deprecated __multicall__ argument
WI1 E:\Projects\Repos\TestAutomation\pyces\conftest.py:312 'pytest_runtest_makereport' hook uses deprecated __multicall__ argument
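The deprecated __multicall__ argument flagged in those warnings was superseded by hookwrapper-style hooks in pytest 2.8. A sketch of the migration, shown as a bare generator so it reads without pytest installed (in a real conftest.py the function would additionally be decorated with @pytest.hookimpl(hookwrapper=True), and the section text below is illustrative):

```python
# Old style (deprecated since pytest 2.8):
#
#     def pytest_runtest_makereport(__multicall__, item, call):
#         report = __multicall__.execute()   # run the other implementations
#         ...                                # post-process `report`
#         return report
#
# New hookwrapper style: yield once, then read the result from the outcome.

def pytest_runtest_makereport(item, call):
    outcome = yield                  # all other hook implementations run here
    report = outcome.get_result()
    # post-process `report` here; a hookwrapper does not return a value
    report.sections.append(("hookwrapper", "post-processed"))
```

The yield replaces the explicit __multicall__.execute() call: everything before it runs ahead of the other implementations, everything after it runs once their combined result is available.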

@nicoddemus
Member

No problem, I'm closing this then. 😄

@nitishr

nitishr commented Sep 25, 2015

The non-zero exit code is causing pytest-watch to indicate a spurious failure when used with pytest-testmon or pytest-incremental. When either of those tools determines there are no tests to be run, pytest-watch reports a failure.

@nicoddemus
Member

@nitishr
By "non-zero" you mean 5 for "no tests executed"? Perhaps those tools should be updated to know about the new status code and act accordingly?
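One way such a tool could adapt is to treat exit code 5 as non-fatal. A minimal sketch (the function name and policy are hypothetical, not part of pytest-watch or the other tools mentioned):

```python
EXIT_OK = 0
EXIT_NO_TESTS_COLLECTED = 5  # introduced in the pytest 2.8 series


def normalize_for_watcher(pytest_exit_code):
    """Map 'no tests collected' (5) to success; pass other codes through."""
    if pytest_exit_code == EXIT_NO_TESTS_COLLECTED:
        return EXIT_OK
    return pytest_exit_code
```

A watcher that reruns only affected tests could pass pytest's return code through this before deciding whether to flag the run as failed.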

@mdengler

I'm glad there is a non-zero exit code when no tests are collected -- exiting zero never seemed right to me -- but I'm seeing an exit code of 5 even when tests have been collected. Should I open a new issue, or is this related to a non-zero number of tests being skipped?

$ py.test [...]
=================================================================================== 306 passed, 17 skipped in 310.65 seconds ====================================================================================
$ echo $?
5

@nicoddemus
Member

@mdengler it should return zero if tests were collected, regardless of whether they were skipped:

import pytest
@pytest.mark.skipif('True')
def test_foo():
    assert 0
>py.test test_foo.py
============================= test session starts =============================
platform win32 -- Python 3.5.0, pytest-2.8.2.dev1, py-1.4.30, pluggy-0.3.1
rootdir: X:\pytest, inifile: tox.ini
collected 1 items

test_foo.py s
=========================== short test summary info ===========================
SKIP [1] test_foo.py:1: condition: True

========================== 1 skipped in 0.01 seconds ==========================

> echo %ERRORLEVEL%
0

If you are getting something else, please open a new issue. 😄
