Consecutive pytest.mark.skipif inside pytest.mark.parameterize #423
Comments
Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise): Hm, I can't reproduce your results. I copied this into a test file with the latest py.test and got these results:
I note you are running Python 3 and I am running 2.7, but I don't know why that would make a difference... shrug. Or perhaps something in pytest (see http://pytest.org/latest/changelog.html#id1) has made an improvement? I would note that constructing tests like this is going to be a little brittle, especially for running them distributed, and even with some selections of -k and -m it may have unexpected results. I am going to try installing Python 3 and see what results I get there.
Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise): I made a new virtualenv with python3.3 and installed pytest from pip (2.5.1). I still get the same results - PASS, FAIL, SKIP. So either it is a change between python 3.3.0 and 3.3.3, a change between pytest 2.5.0 and 2.5.1, or you have some local strangeness happening. :(
Original comment by Ldiary Translations (BitBucket: ldiary, GitHub: ldiary): Thanks a lot Brianna for the follow-up. Have you tried running the "skipsecondthird" mark? What I notice is that you are running "skipthird", which also runs fine on my machine. Our tests need to be able to run like "skipsecondthird", though, because if one of the parameters fails we know for certain that our system is broken, and we don't want to run the remaining tests for that parametrized value. So we need to have consecutive skipifs inside parametrize.
Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise): Argh, I'm sorry. That was a reading comprehension fail. I also get the results PASS FAIL FAIL for "skipsecondthird". I just tried "unrolling" the tests, having three separate test functions with skipif statements on the last two. Same results, which indicates it is a problem with skipif in general rather than the fact that it is being used inside parametrize. However, I found that using an imperative skip gives the results you are after:
... which gives me PASS FAIL SKIP. Note I changed the initial value of
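A minimal sketch of the imperative-skip approach described above (her original snippet was not preserved in this copy of the thread; the flag name, test values, and function name are illustrative):

```python
import pytest

previous_failed = False  # flipped when an earlier parameter fails

@pytest.mark.parametrize(("n", "expected"), [(1, 2), (2, 4), (3, 4)])
def test_increment(n, expected):
    global previous_failed
    if previous_failed:
        pytest.skip("an earlier parameter already failed")
    if n + 1 != expected:
        previous_failed = True  # remember the failure for later parameters
    assert n + 1 == expected
```

With the deliberately wrong second value, this yields PASS, FAIL, SKIP: the second parameter trips the flag before its assertion fails, and the third call then skips itself imperatively at run time rather than relying on a mark.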
Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise): This may be of interest to you: holger's answer about incremental or 'step' testing. It is designed for test methods on a test class, though, so it doesn't work out of the box for a parametrized test function. It might be possible to tweak it for that...
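The recipe referenced above is essentially the "incremental" marker from the pytest documentation. A sketch of that conftest.py approach (which, as noted, targets the methods of a test class rather than a parametrized function):

```python
# conftest.py -- sketch of the documented "incremental" recipe: once one
# test method of a marked class fails, the remaining methods are xfailed.
import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords and call.excinfo is not None:
        # remember the failing item on its parent (the test class)
        item.parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```

Test classes then opt in with @pytest.mark.incremental; a plain parametrized function would need this tweaked, e.g. to key the remembered failure on the originating test function rather than on the class.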
Original comment by Brianna Laugher (BitBucket: pfctdayelise, GitHub: pfctdayelise): Just a final note: I think the real 'problem' is that parametrization is done during collection, i.e. I think the skipif is being evaluated before any tests at all have even run. Although in that case you would expect PASS SKIP SKIP rather than PASS FAIL FAIL. I guess that is a mystery for another day.
Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42): I think the skipif decision is made at test run time, but the skipif(STRING) syntax caches the result of evaluating the string. IOW, it does not presume it's dealing with a condition that could change from test to test. You could try to use a boolean expression instead, but you then need to specify a reason string. Unless someone objects, I am going to close this issue soon, as I don't think we need to change anything about how pytest operates here.
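A short sketch of the two skipif forms holger contrasts above (the flag name and reason text are illustrative):

```python
import pytest

BACKEND_BROKEN = False  # illustrative module-level flag

# String condition: pytest evaluates the expression itself; a reason is optional.
@pytest.mark.skipif("BACKEND_BROKEN")
def test_string_condition():
    assert True

# Boolean condition: the value is passed in directly; a reason= string is required.
@pytest.mark.skipif(BACKEND_BROKEN, reason="backend already broken")
def test_boolean_condition():
    assert True
```

Note that the boolean is evaluated where the decorator is applied (at import time), so it cannot reflect a condition that only becomes true while the tests are running; the imperative skip shown earlier remains the more direct fit for that use case.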
Original comment by Ldiary Translations (BitBucket: ldiary, GitHub: ldiary): Thanks Holger, I have no objection to closing this issue. Our use case does not allow us to just put an xfail, because as far as I understand, when you mark a test as xfail the test will still be executed... xfail only suppresses failure messages etc. Our case needs a complete halt or skip of any further tests during runtime, because if the previous test failed it means that the backend infrastructure being manipulated by the test is already broken. For us it means that the infrastructure, which we think of as similar to a platform/dependency/env, is already broken and further manipulations will only cause further damage to its condition. Brianna's suggestion of using an imperative skip works in my environment and fits our purpose though, so I really have no objection to the issue being closed.
Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42): Closing the issue, but a final note: instead of the imperative
Originally reported by: Ldiary Translations (BitBucket: ldiary, GitHub: ldiary)
Given the test module below, which was derived from the documentation example:
http://pytest.org/latest/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions
The test marks "default", "skipsecond" and "skipthird" work as expected, but the test mark "skipsecondthird" has strange behaviour.
We can't understand why, when the "skipsecondthird" test mark is executed as below, the last test is still executed three times. We expect the first test execution to PASS, the second to FAIL, and the third to be SKIPPED; but that is not what happens.
Are consecutive skipifs inside parametrize supported in the first place?
Or is this a bug?
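The reporter's test module (and the command lines used for the "default", "skipsecond", "skipthird", and "skipsecondthird" runs) were not preserved in this copy of the thread. A rough sketch of the shape being described, i.e. a parametrized test where consecutive parameters carry skipif marks whose condition an earlier failing parameter is supposed to trip (names, values, and the flag are illustrative):

```python
import pytest

previous_failed = False  # a failing parameter is meant to set this

@pytest.mark.parametrize(("n", "expected"), [
    (1, 2),  # expected to PASS
    pytest.param(2, 4, marks=pytest.mark.skipif(   # deliberately wrong: expected to FAIL
        "previous_failed", reason="a previous parameter already failed")),
    pytest.param(3, 5, marks=pytest.mark.skipif(   # expected to be SKIPPED once the flag is set
        "previous_failed", reason="a previous parameter already failed")),
])
def test_increment(n, expected):
    global previous_failed
    if n + 1 != expected:
        previous_failed = True
    assert n + 1 == expected
```

The expectation in the report is PASS, FAIL, SKIP, while the behaviour discussed in the comments above is PASS, FAIL, FAIL, which holger attributes to the string form of the skipif condition caching its evaluation; whether a particular pytest version reproduces this with the sketch above is not guaranteed.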