Question: How can I parametrize my tests with a global variable whose value changes at runtime? #3273
parametrization happens before any test is executed, so your example can never work
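To illustrate the point above: decorator arguments are evaluated when the test module is imported (i.e. during collection), before any test function runs. The snippet below uses a rough pure-Python stand-in for `pytest.mark.parametrize` (the `parametrize` helper here is a simplified assumption, not pytest's actual implementation) to show why values appended later are never seen:

```python
TESTCASES = []  # still empty when the module is imported

def parametrize(values):
    # simplified stand-in for pytest.mark.parametrize:
    # the parameter list is captured at decoration time
    snapshot = list(values)
    def wrap(fn):
        fn.params = snapshot
        return fn
    return wrap

@parametrize(TESTCASES)   # captures the list while it is still empty
def test_publish():
    pass

# a later test (or any runtime code) filling the list has no effect
TESTCASES.extend([1, 2, 3])

print(test_publish.params)  # the captured parameters: []
```

The captured parameter list stays empty even though `TESTCASES` was filled afterwards, which is exactly the behavior described in this issue.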
(@thakkardharmik I edited your post to include syntax highlighting, it greatly improves readability; for more information please take a look here 👍)
As @RonnyPfannschmidt mentioned this is not currently possible; furthermore, ideally tests should not depend on one another. It is hard to recommend a solution without further details, though.
@nicoddemus The exact thing I am trying to do is as follows:

```python
def deployDataset():
    # deploys the data
    ...

TESTCASES = []

def generate_test():
    # runs some tests using a different framework and publishes
    # the results in some variable, e.g.:
    # TESTCASES = [1, 2, 3, 4]
    ...

if <some condition>:
    generate_test()

@pytest.mark.parametrize("t", TESTCASES)
def test_publish(t):
    # iterates over all test case results and publishes them
    ...
```

If I try to use `deployDataset()` as a fixture, then `deployDataset` is executed after the `generate_test()` method. I could loop through the list inside the test, but that's not desirable.
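Since parametrization happens during collection, the values must already exist by then. One hedged alternative, assuming the external run can be triggered at collection time, is pytest's `pytest_generate_tests` hook in a `conftest.py`, which computes parameters just before each test is collected (the `run_external_tests` function below is a hypothetical stand-in for deploying the dataset and running the external framework):

```python
# conftest.py (sketch)

def run_external_tests():
    # hypothetical stand-in: deploy the dataset, run the external
    # framework, and return the collected results
    return [1, 2, 3, 4]

def pytest_generate_tests(metafunc):
    # called once per collected test function; parametrize any test
    # that declares a "t" argument with the freshly computed values
    if "t" in metafunc.fixturenames:
        metafunc.parametrize("t", run_external_tests())
```

With this in place, a test such as `def test_publish(t): ...` receives one parametrized invocation per result, without needing a module-level global that is mutated by another test.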
If you run the tests using something else entirely anyway, why use pytest for reporting? It's a rather unnatural use case for a test framework, and depending on your needs it may be vastly better to just do direct reporting.
The reporting is done in pytest to publish the report following a standard. Is there any way I can achieve this? @RonnyPfannschmidt Thanks
@thakkardharmik what exactly is the "standard" in your case?
@RonnyPfannschmidt We have an infra build which runs pytest and publishes a report from pytest. Since the earlier tests were written in Perl, this pytest suite just acts as a wrapper to publish results to the report, whereas the actual tests are run elsewhere.
But pytest reports themselves are not a standard; again, what is your standard?
JUnit reports perhaps? |
I think we have local infra that aggregates the test result and publishes it to the dashboard. |
The core of our question is: what do you mean by "report" exactly? The console output? Regardless, pytest is really not meant to be used like this; its purpose is to actually run the tests, so I'm not sure what you want is possible (although it is not really clear to me what you are trying to accomplish, TBH).
--junitxml report |
@thakkardharmik thanks for the answer. Unfortunately this is a very unorthodox use case which is not supported. |
Thanks @nicoddemus |
@thakkardharmik In the past I have advised people who needed to turn externally generated test data into junitxml to skip pytest and just generate the junitxml directly - it was much quicker and much less error-prone than using a complete test framework to churn out junitxml for data you already have.
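The suggestion above can be sketched with the standard library alone. This is a minimal, hedged example of emitting JUnit-style XML directly from result data you already have; the `(name, passed, message)` tuple shape, the suite name, and the attribute set are assumptions, not a complete implementation of the JUnit schema:

```python
import xml.etree.ElementTree as ET

def write_junit_xml(results, path):
    # results: list of (name, passed, message) tuples (assumed shape)
    failures = sum(1 for _, ok, _ in results if not ok)
    suite = ET.Element("testsuite", name="external-tests",
                       tests=str(len(results)), failures=str(failures))
    for name, ok, message in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if not ok:
            # a <failure> child marks the case as failed in JUnit readers
            ET.SubElement(case, "failure", message=message)
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

write_junit_xml([("a", True, ""), ("b", False, "boom")], "junit_sketch.xml")
```

For data that was produced by another framework, this avoids round-tripping through pytest collection entirely, which is the point being made in this comment.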
I have a global list variable which I populate in one of the tests, and I am using this variable to parametrize another test. One way I found was to use pickle. Is there any other way I can handle this scenario?
Example -
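As a hedged illustration of the pickle workaround mentioned in the question (the file name `testcases.pkl` and the helper names are assumptions): one run saves the generated values to disk, and a later pytest run loads them at import time so they exist before parametrization happens.

```python
import os
import pickle

CACHE = "testcases.pkl"  # assumed location for the cached values

def save_testcases(values):
    # first run: persist the values produced at runtime
    with open(CACHE, "wb") as f:
        pickle.dump(values, f)

def load_testcases():
    # later run: load at import time, so the values are available
    # when the parametrized test is collected
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    return []  # first run: nothing to parametrize yet
```

A module could then do `TESTCASES = load_testcases()` at the top level and parametrize against that, at the cost of needing two separate runs.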