Incremental testing from example in documentation #3125
GitMate.io thinks the contributor most likely able to help you is @nicoddemus.
@kam1sh thanks for reporting this. Indeed the sample does not handle parametrization and skipping. We should update the documentation with another example that handles these.
You may also wish to have a look at pytest-steps if you need a bit more control over the various dependencies inside the class (instead of just monotonic incremental execution). I am aware that this does not help fix the pytest docs, but I am quite happy with the result, so I'm just sharing :)
I have found a discussion on Stack Overflow about this issue and posted an answer (https://stackoverflow.com/a/57854471/2352026) that I am using in my test suite. Maybe a potential solution?
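The idea in that answer is, roughly, to track failures per test class and per parametrization index, so that only later steps sharing the same parameters get xfailed. The sketch below follows that idea but is not the exact code from the answer; the skip check is an extra guard for the skip problem reported in this issue.

```python
# conftest.py -- sketch: remember which step failed, keyed by test class
# and by the parametrize indices of the failing call.
import pytest

# {class name: {tuple of parametrize indices: name of the step that failed}}
_test_failed_incremental = {}


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        # count only real failures, not skips
        if call.excinfo is not None and not call.excinfo.errisinstance(
            pytest.skip.Exception
        ):
            cls_name = str(item.cls)
            # indices of the parametrized arguments of this call, if any
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            test_name = item.originalname or item.name
            _test_failed_incremental.setdefault(cls_name, {}).setdefault(
                parametrize_index, test_name
            )


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        cls_name = str(item.cls)
        if cls_name in _test_failed_incremental:
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            test_name = _test_failed_incremental[cls_name].get(parametrize_index)
            if test_name is not None:
                pytest.xfail("previous test failed ({})".format(test_name))
```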
@sdementen right now I think the solution could be to implement, at the pytest level, some kind of UUID for each test, because remembering a test by its full name + fixtures is complicated. But maybe I'm wrong, because I worked on this issue a long time ago =/
@kam1sh I do not understand immediately how a UUID would be beneficial in this context. Could you elaborate? |
As far as I can tell it's not beneficial in any way |
@sdementen it would be good to submit your fixed version (https://stackoverflow.com/questions/39898702/using-mark-incremental-and-metafunc-parametrize-in-a-pytest-test-class/57854471#57854471) as a PR.
Ok @blueyed, I'll work on it when I find some spare time.
@blueyed I have pushed a PR. Could you review it and tell me if it is okay or needs other changes?
First of all, thank you for pytest. It is a great testing framework with a good interface and reporting.
I wanted to separate my test case into steps, for example:
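A minimal sketch of what such a stepwise test class can look like (the class and method names are taken from the pytest docs' incremental example; the bodies are placeholders):

```python
import pytest


@pytest.mark.incremental
class TestUserHandling:
    # Each method is one step of the scenario; the report then shows
    # which step failed instead of one long traceback for the whole flow.
    def test_login(self):
        pass

    def test_modification(self):
        pass

    def test_deletion(self):
        pass
```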
I did it just because it looks nice in the report: if one part of the scenario fails, you can see which part failed without having to read tracebacks.
So I used the code from the documentation, and now if one step fails, the next methods are not executed.
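The incremental example in the documentation at that time was roughly this pair of hooks in conftest.py: remember the last failed item on the class node and xfail every later step of the same class.

```python
# conftest.py -- the hooks roughly as shown in the pytest docs
import pytest


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # any exception (including a skip!) marks the class as failed
            item.parent._previousfailed = item


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```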
But today I discovered that if the methods use parameters or a fixture with parameters, weird things start happening:
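A hypothetical reproduction (parameter values made up for illustration) of what happens when the steps take a parametrized fixture:

```python
import pytest


@pytest.fixture(params=["foo", "bar"])
def param(request):
    return request.param


@pytest.mark.incremental
class TestScenario:
    def test_step_one(self, param):
        assert param != "foo"  # fails only for the "foo" parameter

    def test_step_two(self, param):
        # With the documented hooks, once test_step_one["foo"] fails, this
        # step (and even test_step_one["bar"]) is xfailed for every
        # parameter, because the hooks only remember that "something in
        # this class failed", not which parametrization failed.
        pass
```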
As you can see, once a class method has the FAILED state, the next steps are xfailed even with a different param. This is the first bug with this code.
The second bug: the next steps will also be xfailed if the current test has been skipped with `@pytest.mark.skip`.
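A hypothetical illustration of the second bug (test names made up): a skipped step is treated like a failure by the documented hooks, so the later steps get xfailed as well.

```python
import pytest


@pytest.mark.incremental
class TestDeployment:
    @pytest.mark.skip(reason="not relevant in this environment")
    def test_optional_step(self):
        pass

    def test_next_step(self):
        # Expected to run normally, but the skip above also sets
        # _previousfailed (call.excinfo is not None for a skip),
        # so this step is reported as xfailed.
        pass
```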
I tried to fix the first bug manually, but I don't know the pytest API well. Eventually, I decided not to use incremental testing at all.
...but I'll be glad if you fix this code in future releases.
My testing environment is: Python 3.6.3, pytest-3.3.2, py-1.5.2, pluggy-0.6.0