
Question: How can I parametrize my tests with a global variable whose value changes at runtime? #3273


Closed
thakkardharmik opened this issue Mar 1, 2018 · 16 comments
Labels
type: question general question, might be closed after 2 weeks of inactivity

Comments

@thakkardharmik

thakkardharmik commented Mar 1, 2018

Question: How can I parametrize my tests with a global variable whose value changes at runtime?
I have a global list variable whose value I populate in one of the tests, and I use this variable to parametrize another test. One workaround I found was to use pickle. Is there any other way I can handle this scenario?

Example -

```python
import pytest

myList = []

def test_1():
    global myList
    myList = [1, 2, 3]

# This test should print 1, 2, 3
@pytest.mark.parametrize("num", myList)
def test_2(num):
    print(num)
```
@pytestbot added the type: question (general question, might be closed after 2 weeks of inactivity) label Mar 1, 2018
@RonnyPfannschmidt
Member

parametrization happens before any test is executed, so your example can never work
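To see why, here is a plain-Python sketch (no pytest required, and not part of the original comment) of the timing: decorator arguments are evaluated when the module is imported, which for pytest means at collection time, before any test body runs. `fake_parametrize` is a hypothetical stand-in for `pytest.mark.parametrize`.

```python
# Sketch: decorator arguments are evaluated at decoration (import/collection)
# time, so a global rebound later inside a test is never seen.

captured = []

def fake_parametrize(values):
    """Hypothetical stand-in for pytest.mark.parametrize: records what it sees."""
    def decorator(fn):
        captured.append(list(values))  # snapshot taken at decoration time
        return fn
    return decorator

my_list = []

@fake_parametrize(my_list)  # sees the empty list here
def test_2():
    pass

def test_1():
    global my_list
    my_list = [1, 2, 3]

test_1()         # rebinding the global afterwards changes nothing
print(captured)  # the decorator only ever saw the empty list
```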

@nicoddemus
Member

nicoddemus commented Mar 1, 2018

(@thakkardharmik I edited your post to include syntax highlighting, it greatly improves readability; for more information please take a look here 👍)

@nicoddemus
Member

As @RonnyPfannschmidt mentioned, this is not currently possible. Furthermore, ideally tests should not depend on one another; it is hard to recommend a solution without further details, though.

@thakkardharmik
Author

@nicoddemus The exact thing I am trying to do is as follows.
I have multiple test modules in one directory that use one set of data. I have a different framework that runs my tests and returns the results. I am using pytest to publish the results.

```python
def deployDataset():
    # deploys the data
    ...

TESTCASES = []

def generate_test():
    # runs the tests using a different framework and stores the results
    # TESTCASES = [1, 2, 3, 4]
    ...

if <some condition>:
    generate_test()

@pytest.mark.parametrize("t", TESTCASES)
def test_publish(t):
    # iterates over all test case results and publishes each one
    ...
```

If I try to use deployDataset() as a fixture, then deployDataset is executed after the generate_test() method.
If I turn generate_test into a pytest test, say by renaming it to test_generate, then deployDataset() is executed before test_generate. But now the challenge is that the test results stored in the variable by test_generate aren't available to test_publish's parametrization.

I could loop through the list inside a single test, but that's not desirable.
Please let me know if anything is unclear. Thanks
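For completeness: when the external results can be computed at collection time rather than inside another test, pytest does provide a supported hook, `pytest_generate_tests`, which runs during collection and can parametrize from freshly computed data. A hedged sketch of a `conftest.py` (the names `generate_test` and `TESTCASES` mirror the question; this approach is not one proposed in the thread):

```python
# conftest.py (sketch): pytest calls pytest_generate_tests once per test
# function during collection, before any test executes, so values computed
# here *can* parametrize test_publish. generate_test() is a placeholder
# for the external framework run described above.

TESTCASES = []

def generate_test():
    # placeholder: run the external framework and collect its results
    TESTCASES.extend([1, 2, 3, 4])

def pytest_generate_tests(metafunc):
    # parametrize any test function that declares a "t" argument
    if "t" in metafunc.fixturenames:
        if not TESTCASES:
            generate_test()
        metafunc.parametrize("t", TESTCASES)
```

This still requires the data to exist before collection finishes; it does not make one test's runtime output visible to another test's parametrization.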

@RonnyPfannschmidt
Member

if you run the tests using something else entirely anyway, why use pytest for reporting?

it's a rather unnatural use case for a test framework, and depending on your needs it may be vastly better to just do direct reporting

@thakkardharmik
Author

The reporting is done in pytest so that the published report follows our standard. Is there any way I can achieve this? @RonnyPfannschmidt Thanks

@RonnyPfannschmidt
Member

@thakkardharmik what exactly is the "standard" in your case?

@thakkardharmik
Author

@RonnyPfannschmidt We have infrastructure built which runs pytest and publishes a report from it. Since the earlier tests were written in Perl, this pytest suite just acts as a wrapper to publish results to the report, whereas the actual tests are run elsewhere.

@RonnyPfannschmidt
Member

but pytest reports themselves are not a standard; again, what is your standard?

@nicoddemus
Member

JUnit reports perhaps?

@thakkardharmik
Author

I think we have local infra that aggregates the test result and publishes it to the dashboard.

@nicoddemus
Member

The core of our question is:

> publish report from pytest

What do you mean by "report" exactly? The console output? The --junitxml report? --result-log?

Regardless, pytest is really not meant to be used like this; its purpose is to actually run the tests, so I'm not sure what you want is possible (although it is not really clear to me what you are trying to accomplish, TBH).

@thakkardharmik
Author

The --junitxml report, @nicoddemus

@nicoddemus
Member

@thakkardharmik thanks for the answer.

Unfortunately this is a very unorthodox use case which is not supported.

@thakkardharmik
Author

Thanks @nicoddemus

@RonnyPfannschmidt
Member

RonnyPfannschmidt commented Mar 2, 2018

@thakkardharmik in the past I have advised people who needed to turn test results not produced by pytest into junitxml to just generate the junitxml directly; it was much quicker and much less error-prone than using a complete test framework to churn out junitxml for data you already have
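That direct-generation approach can be as small as a standard-library script. A minimal sketch, not from this thread; the result tuples, suite name, and file name are made up for illustration:

```python
# Sketch: emit a minimal JUnit-style XML report for results produced
# elsewhere, using only the standard library.
import xml.etree.ElementTree as ET

def write_junit_xml(results, path):
    """results: iterable of (name, passed, message) tuples."""
    results = list(results)
    failures = sum(1 for _, passed, _ in results if not passed)
    suite = ET.Element("testsuite", name="external-results",
                       tests=str(len(results)), failures=str(failures))
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if not passed:
            ET.SubElement(case, "failure", message=message)
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

# hypothetical results from an external framework run
write_junit_xml([("deploy_dataset", True, ""),
                 ("publish_results", False, "dashboard unreachable")],
                "report.xml")
```

The resulting file follows the testsuite/testcase/failure shape that JUnit-XML consumers (CI dashboards and the like) typically expect; the exact attributes required vary by consumer.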
