
Consecutive pytest.mark.skipif inside pytest.mark.parameterize #423


Closed
pytestbot opened this issue Jan 16, 2014 · 10 comments


Originally reported by: Ldiary Translations (BitBucket: ldiary, GitHub: ldiary)


Given the test module below, which was derived from the documentation example:
http://pytest.org/latest/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions

#!python

import pytest

flag = []


@pytest.mark.default
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_default_example(my_input, expected):
    assert eval(my_input) == expected


@pytest.mark.skipsecond
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", "Failed"),
    pytest.mark.skipif("flag == []", ("2+4", 6)),
    ("6*9", 42),
])
def test_skip_second(my_input, expected):
    print("\nBefore clearing flag: ", flag)
    while len(flag) > 0:
        flag.pop()
    print("After clearing flag: ", flag)
    assert eval(my_input) == expected
    flag.append("Previous Test Passed")


@pytest.mark.skipthird
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", 8),
     ("2+4", "Failed"),
    pytest.mark.skipif("flag == []", ("6*9", 42)),
])
def test_skip_third(my_input, expected):
    print("\nBefore clearing flag: ", flag)
    while len(flag) > 0:
        flag.pop()
    print("After clearing flag: ", flag)
    assert eval(my_input) == expected
    flag.append("Previous Test Passed")


@pytest.mark.skipsecondthird
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", 8),
    pytest.mark.skipif("flag == []", ("2+4", "Failed")),
    pytest.mark.skipif("flag == []", ("6*9", 42)),
])
def test_skip_second_third(my_input, expected):
    print("\nBefore clearing flag: ", flag)
    while len(flag) > 0:
        flag.pop()
    print("After clearing flag: ", flag)
    assert eval(my_input) == expected
    flag.append("Previous Test Passed")

The test marks "default", "skipsecond" and "skipthird" work as expected, but the test mark "skipsecondthird" behaves strangely.

We can't understand why, when the "skipsecondthird" test mark is executed as shown below, the test is still executed three times. We expected the first execution to PASS, the second to FAIL, and the third to be SKIPPED; but that is not what happened.

Are consecutive skipifs inside parametrize supported in the first place?
Or is this a bug?

#!shell

$ py.test -svm skipsecondthird
========== test session starts =============================
platform linux -- Python 3.3.3 -- pytest-2.5.0 -- /home/ldiary/py3env/bin/python3

SecondSkipInParameterized.py:47: test_skip_second_third[3+5-8]
Before clearing flag:  []
After clearing flag:  []
PASSED
SecondSkipInParameterized.py:47: test_skip_second_third[2+4-Failed]
Before clearing flag:  ['Previous Test Passed']
After clearing flag:  []
FAILED
SecondSkipInParameterized.py:47: test_skip_second_third[6*9-42]
Before clearing flag:  []
After clearing flag:  []
FAILED
=========== 2 failed, 1 passed, 118 deselected in 0.37 seconds ===============


Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise):


Hm, I can't reproduce your results. I copied this into a test file with the latest py.test and got these results:

#!shell

(pytest)blaugher@scorpion:~/workspace/pytest$ py.test --version
This is pytest version 2.5.2.dev1, imported from /home/blaugher/.virtualenvs/pytest/local/lib/python2.7/site-packages/pytest-2.5.2.dev1-py2.7.egg/pytest.pyc

(pytest)blaugher@scorpion:~/workspace/pytest$ py.test test_parametrizeskipif.py --tb=short -vsm skipthird
============================================== test session starts ===============================================
platform linux2 -- Python 2.7.3 -- pytest-2.5.2.dev1 -- /home/blaugher/.virtualenvs/pytest/bin/python
collected 12 items 

test_parametrizeskipif.py:31: test_skip_third[3+5-8] ('\nBefore clearing flag: ', [])
('After clearing flag: ', [])
PASSED
test_parametrizeskipif.py:31: test_skip_third[2+4-Failed] ('\nBefore clearing flag: ', ['Previous Test Passed'])
('After clearing flag: ', [])
FAILED
test_parametrizeskipif.py:31: test_skip_third[6*9-42] SKIPPED

==================================================== FAILURES ====================================================
__________________________________________ test_skip_third[2+4-Failed] ___________________________________________
test_parametrizeskipif.py:42: in test_skip_third
>       assert eval(my_input) == expected
E       assert 6 == 'Failed'
E        +  where 6 = eval('2+4')
============================================ short test summary info =============================================
SKIP [1] /home/blaugher/.virtualenvs/pytest/local/lib/python2.7/site-packages/pytest-2.5.2.dev1-py2.7.egg/_pytest/skipping.py:132: condition: flag == []
===================================== 9 tests deselected by "-m 'skipthird'" =====================================
========================== 1 failed, 1 passed, 1 skipped, 9 deselected in 0.01 seconds ========================

I note you are running Python 3 and I am running 2.7, but I don't know why that would make a difference... *shrug* Or perhaps something in the pytest changelog (http://pytest.org/latest/changelog.html#id1) has made an improvement?

I would note that constructing tests like this is going to be a little brittle, especially when running them distributed, and even some selections of -k and -m may have unexpected results.

I am going to try installing python 3 and see what results I get there.


Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise):


I made a new virtualenv with Python 3.3 and installed pytest from pip (2.5.1). I still get the same results: PASS, FAIL, SKIP. So either it is a change between Python 3.3.0 and 3.3.3, a change between pytest 2.5.0 and 2.5.1, or you have some local strangeness happening. :(

#!shell

(pytestWpy33)blaugher@scorpion:~/workspace/pytest$ py.test test_parametrizeskipif.py --tb=short -vsm skipthird
============================================== test session starts ===============================================
platform linux -- Python 3.3.0 -- pytest-2.5.1 -- /home/blaugher/.virtualenvs/pytestWpy33/bin/python3.3
collected 12 items 

test_parametrizeskipif.py:31: test_skip_third[3+5-8] 
Before clearing flag:  []
After clearing flag:  []
PASSED
test_parametrizeskipif.py:31: test_skip_third[2+4-Failed] 
Before clearing flag:  ['Previous Test Passed']
After clearing flag:  []
FAILED
test_parametrizeskipif.py:31: test_skip_third[6*9-42] SKIPPED

==================================================== FAILURES ====================================================
__________________________________________ test_skip_third[2+4-Failed] ___________________________________________
test_parametrizeskipif.py:42: in test_skip_third
>       assert eval(my_input) == expected
E       assert 6 == 'Failed'
E        +  where 6 = eval('2+4')
============================================ short test summary info =============================================
SKIP [1] /home/blaugher/.virtualenvs/pytestWpy33/lib/python3.3/site-packages/_pytest/skipping.py:132: condition: flag == []
===================================== 9 tests deselected by "-m 'skipthird'" =====================================
========================== 1 failed, 1 passed, 1 skipped, 9 deselected in 0.02 seconds ========================


Original comment by Ldiary Translations (BitBucket: ldiary, GitHub: ldiary):


Thanks a lot Brianna for the follow-up, but have you tried running the "skipsecondthird" mark? What I notice is that you're running "skipthird", which also runs fine on my machine.

Our tests need to be able to run like "skipsecondthird", though: if one of the parameters fails, we know for certain that our system is broken, and we don't want to run the remaining parametrized values. So we need consecutive skipifs inside parametrize.


Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise):


Argh, I'm sorry. That was a reading comprehension fail.

I also get the results PASS FAIL FAIL for "skipsecondthird".

I just tried "unrolling" the tests, having three separate test functions with skipif statements on the last two. Same results, which indicates it is a problem with skipif in general rather than with the fact that it is being used inside parametrize (see the sketch below).
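
For reference, a minimal sketch of that unrolled variant, reusing the string-condition skipif style and the flag-clearing convention from the original module:

#!python

import pytest

flag = []

def test_first():
    assert eval("3+5") == 8
    flag.append("Previous Test Passed")

@pytest.mark.skipif("flag == []")
def test_second():
    while len(flag) > 0:  # clear the flag, so a failure below leaves it empty
        flag.pop()
    assert eval("2+4") == "Failed"
    flag.append("Previous Test Passed")

@pytest.mark.skipif("flag == []")
def test_third():
    while len(flag) > 0:
        flag.pop()
    assert eval("6*9") == 42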

However, I found using an imperative skip gives the results you are after:

#!python


import pytest

flag = ['initially ok']

@pytest.mark.internalskip
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", 8),
    ("2+4", "Failed"),
    ("6*9", 42),
])
def test_skip_second_third(my_input, expected):
    if flag == []:
        pytest.skip()
    print("\nBefore clearing flag: ", flag)
    while len(flag) > 0:
        flag.pop()
    print("After clearing flag: ", flag)
    assert eval(my_input) == expected
    flag.append("Previous Test Passed")

... which gives me PASS FAIL SKIP.

Note I changed the initial value of flag: if it started out empty, the very first test would be skipped as well.


Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise):


This may be of interest to you: holger's answer about incremental or 'step' testing (sketched below). It is designed for test methods on a test class, though, so it doesn't work out of the box for a parametrized test function; it might be possible to tweak it for that...
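
For reference, the hooks from that incremental-testing recipe (roughly as given in the pytest docs) go in a conftest.py; tests opt in with a @pytest.mark.incremental mark on the class:

#!python

# conftest.py
import pytest

def pytest_runtest_makereport(item, call):
    # remember the first failing test method of an "incremental" class
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    # xfail every later test method on the same class
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)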


Original comment by Ldiary Translations (BitBucket: ldiary, GitHub: ldiary):


Again, thanks a lot!
In that case, we would prefer using the "internalskip" approach, because it removes the clutter from the parametrize values.
Much appreciated, thank you!


Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise):


Just a final note, I think the real 'problem' is that parametrization is done during collection, i.e. I think the skipif is being evaluated before any tests at all have even run. Although in that case you would expect PASS SKIP SKIP, rather than PASS FAIL FAIL. I guess that is a mystery for another day.


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


I think the skipif decision is made at test run time, but the skipif(STRING) syntax caches the result of evaluating the string. IOW, it does not presume it's dealing with a condition that could change from test to test. You could try using a boolean expression, but then you need to specify a reason string, i.e. pytest.mark.skipif(flag == [], reason="previous fail"). I'd recommend using the "xfail" marker instead, however, because that marker is meant for "expected failures", whereas skipping is meant to deal with platform/dependency/env conditions, i.e. is not related to implementation problems or test failures.
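
For illustration, a sketch of the boolean form with an explicit reason string; note that a plain boolean condition is evaluated once, when the decorator is applied at import time, so it cannot react to state changed by earlier tests:

#!python

import pytest

flag = []

@pytest.mark.skipif(flag == [], reason="previous fail")  # evaluated at import time
def test_dependent():
    assert eval("6*9") == 42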

Unless someone objects, I am going to close this issue soon, as I don't think we need to change anything about how pytest operates here.


Original comment by Ldiary Translations (BitBucket: ldiary, GitHub: ldiary):


Thanks Holger, I have no objection to closing this issue.

Our use case does not allow us to just use an xfail because, as far as I understand, when you mark a test as xfail the test is still executed... xfail only suppresses the failure messages etc.

Our case needs a complete halt, or a skip of any remaining tests at runtime, because if the previous test failed it means the backend infrastructure being manipulated by the tests is already broken. For us that infrastructure is comparable to a platform/dependency/env condition: once it is broken, further manipulation will only damage it further.

Brianna's suggestion of using an internal skip works in my environment and fits our purpose though, so I really have no objection to the issue being closed.


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Closing the issue, but a final note: instead of the imperative pytest.skip call you can also use pytest.xfail (sketched below). I still think it's better to reserve skips for platform/dependency mismatches and xfail for implementation failures/dependent-test problems.
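
A sketch of that variant: Brianna's "internalskip" example with pytest.skip swapped for pytest.xfail. The imperative pytest.xfail call likewise aborts the test immediately, but reports it as an expected failure instead of a skip:

#!python

import pytest

flag = ['initially ok']

@pytest.mark.internalskip
@pytest.mark.parametrize("my_input,expected", [
    ("3+5", 8),
    ("2+4", "Failed"),
    ("6*9", 42),
])
def test_skip_second_third(my_input, expected):
    if flag == []:
        pytest.xfail("previous test failed")  # aborts the test here
    while len(flag) > 0:
        flag.pop()
    assert eval(my_input) == expected
    flag.append("Previous Test Passed")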

@pytestbot pytestbot added the type: bug problem that needs to be addressed label Jun 15, 2015