
Fix accept mechanism in tests #3721


Merged
NicolasHug merged 2 commits into pytorch:master from the mungedid branch on Apr 26, 2021
Conversation

NicolasHug
Member

This PR addresses #3697 (comment)

I removed the --accept option, which doesn't seem to play well with pytest and is redundant with the already-supported EXPECTTEST_ACCEPT environment variable anyway.

Test case below: I delete the pkl file for alexnet, get the expected error, then apply the fix suggested in the error message and re-run the test, which passes (sorry it doesn't render super well):

(pt) ➜  vision git:(mungedid) ✗ mv test/expect/ModelTester.test_alexnet_expect.pkl ~
(pt) ➜  vision git:(mungedid) ✗ pytest test/test_models.py -x -k alex --tb=short
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.8.8, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/nicolashug/dev/vision
plugins: cov-2.11.1
collected 67 items / 66 deselected / 1 selected

test/test_models.py F                                                                                                                                                      [100%]

==================================================================================== FAILURES ====================================================================================
____________________________________________________________________ test_classification_model[dev0-alexnet] _____________________________________________________________________
test/test_models.py:438: in test_classification_model
    ModelTester()._test_classification_model(model_name, input_shape, dev)
test/test_models.py:84: in _test_classification_model
    self.assertExpected(out.cpu(), name, prec=0.1)
test/common_utils.py:126: in assertExpected
    expected_file = self._get_expected_file(name)
test/common_utils.py:109: in _get_expected_file
    raise RuntimeError(
E   RuntimeError: No expect file exists for ModelTester.test_alexnet_expect.pkl in /Users/nicolashug/dev/vision/test/expect/ModelTester.test_alexnet_expect.pkl; to accept the current output, re-run the failing test after setting the EXPECTTEST_ACCEPT env variable. For example: EXPECTTEST_ACCEPT=1 pytest test/test_models.py -k alexnet
============================================================================ short test summary info =============================================================================
FAILED test/test_models.py::test_classification_model[dev0-alexnet] - RuntimeError: No expect file exists for ModelTester.test_alexnet_expect.pkl in /Users/nicolashug/dev/visi...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================================== 1 failed, 66 deselected in 1.06s ========================================================================




(pt) ➜  vision git:(mungedid) ✗ EXPECTTEST_ACCEPT=1 pytest test/test_models.py -k alexnet
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.8.8, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/nicolashug/dev/vision
plugins: cov-2.11.1
collected 67 items / 66 deselected / 1 selected

test/test_models.py .                                                                                                                                                      [100%]

================================================================================ warnings summary ================================================================================
test/test_models.py::test_classification_model[dev0-alexnet]
  /Users/nicolashug/dev/vision/test/common_utils.py:270: RuntimeWarning: The check_jit_scriptable test for AlexNet was skipped. This test checks if the module's results in TorchScript match eager and that it can be exported. To run these tests make sure you set the environment variable PYTORCH_TEST_WITH_SLOW=1 and that the test is not manually skipped.
    warnings.warn(msg, RuntimeWarning)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================== 1 passed, 66 deselected, 1 warning in 0.93s ===================================================================
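
For context, the error above comes from the env-variable accept path. Boiled down, the mechanism amounts to something like the sketch below (hypothetical helper names, simplified from what test/common_utils.py actually does): if EXPECTTEST_ACCEPT is set, the current output is serialized as the new expect file; otherwise a missing file raises the RuntimeError shown in the transcript.

import os
import pickle

def assert_expected(actual, expected_file, prec=0.1):
    # Hedged sketch, not the real common_utils.assertExpected: accept the
    # current output as the new expectation when EXPECTTEST_ACCEPT is set.
    if os.environ.get("EXPECTTEST_ACCEPT"):
        with open(expected_file, "wb") as f:
            pickle.dump(actual, f)
        return
    if not os.path.exists(expected_file):
        raise RuntimeError(
            f"No expect file exists at {expected_file}; to accept the current "
            "output, re-run the failing test with EXPECTTEST_ACCEPT=1"
        )
    with open(expected_file, "rb") as f:
        expected = pickle.load(f)
    # Compare the stored and current tensors within the given tolerance.
    assert (actual - expected).abs().max() <= prec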

@datumbox
Contributor
LGTM. I'm always using the env var, so if pytest does not play well with arguments, I have no issue with removing it.

@NicolasHug
Member Author

if pytest does not play well with arguments, I have no issue with removing it.

To be perfectly honest, there probably is a way, but since the env variable is working fine I didn't bother to try.
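
One plausible way to wire it up (a sketch only, not something this PR adds) would be a conftest.py hook that registers the flag and bridges it to the existing env variable:

# conftest.py -- sketch of a hypothetical --accept flag (not part of this PR)
import os

def pytest_addoption(parser):
    # Register a custom command-line option with pytest.
    parser.addoption("--accept", action="store_true", default=False,
                     help="regenerate expect files instead of comparing")

def pytest_configure(config):
    # Bridge the flag to the env variable that the test helpers already check.
    if config.getoption("--accept"):
        os.environ["EXPECTTEST_ACCEPT"] = "1"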

@NicolasHug NicolasHug merged commit 283c790 into pytorch:master Apr 26, 2021
@NicolasHug NicolasHug deleted the mungedid branch April 26, 2021 09:06
facebook-github-bot pushed a commit that referenced this pull request May 4, 2021
Reviewed By: NicolasHug

Differential Revision: D28169159

fbshipit-source-id: ebc5955e62c1f6184124ccd3725bc3ebcd3325ab