Add a test that compares the output of our quantized models against expected cached values #4597


Merged: 22 commits, Oct 13, 2021.

Commits:
- e426532: adding tests to check output of quantized models (jdsgomes, Oct 11, 2021)
- 052eb6a: adding test quantized model weights (jdsgomes, Oct 11, 2021)
- 09f0cff: merge test_new_quantized_classification_model with test_quantized_cla… (jdsgomes, Oct 12, 2021)
- 7ab98ad: adding skipif removed by mistake (jdsgomes, Oct 12, 2021)
- 45677b2: addressing comments from PR (jdsgomes, Oct 12, 2021)
- 032eb85: removing unused argument (jdsgomes, Oct 12, 2021)
- 3683176: Merge branch 'main' into output-checks-on-quant-models (jdsgomes, Oct 12, 2021)
- edd4e9f: fixing lint errors (jdsgomes, Oct 12, 2021)
- 918b7e6: changing model to eval model and updating weights (jdsgomes, Oct 12, 2021)
- f247671: Update test/test_models.py (jdsgomes, Oct 12, 2021)
- 87828a0: enforce single test in circleci (jdsgomes, Oct 12, 2021)
- 9d62cab: changing random seed (jdsgomes, Oct 12, 2021)
- 8d62f65: Merge branch 'output-checks-on-quant-models' of github.com:jdsgomes/v… (jdsgomes, Oct 12, 2021)
- 8a1da06: updating weights for new seed (jdsgomes, Oct 12, 2021)
- e396ea5: adding missing empty line (jdsgomes, Oct 12, 2021)
- 91e51ed: try 128 random seed (jdsgomes, Oct 12, 2021)
- 8213667: try 256 random seed (jdsgomes, Oct 12, 2021)
- 34d61c6: try 16 random seed (jdsgomes, Oct 12, 2021)
- 17a74ce: disable inception_v3 input/output quantization tests (jdsgomes, Oct 12, 2021)
- 442f561: removing ModelTester.test_inception_v3_quantized_expect.pkl (jdsgomes, Oct 12, 2021)
- 1fbd36e: reverting temporary ci run_test.sh changes (jdsgomes, Oct 12, 2021)
- 2d2d8d1: Merge branch 'main' into output-checks-on-quant-models (datumbox, Oct 13, 2021)
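The seed-tuning commits above (try 128, 256, then 16) only make sense if every random source is reseeded before the test input is generated, so the cached expected values stay reproducible across runs. A minimal sketch of such a helper, assuming seeding torch and Python's `random` is enough for these tests (the exact sources torchvision's `set_rng_seed` covers are not shown here):

```python
import random

import torch


def set_rng_seed(seed: int) -> None:
    # Reseed every RNG the test draws from so that torch.rand(...)
    # produces the same input tensor on every run.
    torch.manual_seed(seed)
    random.seed(seed)


set_rng_seed(16)
a = torch.rand(1, 3, 224, 224)
set_rng_seed(16)
b = torch.rand(1, 3, 224, 224)
assert torch.equal(a, b)  # identical inputs across reseeded runs
```

With identical inputs guaranteed, any drift in the cached comparison comes from the model itself, which is why the remaining flakiness was handled by skipping specific models rather than loosening the seed.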
10 binary files changed (the cached expected-value files; contents not shown).
16 changes: 15 additions & 1 deletion test/test_models.py
@@ -220,6 +220,11 @@ def _check_input_backprop(model, inputs):
"maskrcnn_resnet50_fpn",
)

# The tests for the following quantized models are flaky possibly due to inconsistent
# rounding errors in different platforms. For this reason the input/output consistency
# tests under test_quantized_classification_model will be skipped for the following models.
quantized_flaky_models = ("inception_v3",)


# The following contains configuration parameters for all models which are used by
# the _test_*_model methods.
@@ -687,7 +692,9 @@ def test_video_model(model_name, dev):
)
@pytest.mark.parametrize("model_name", get_available_quantizable_models())
def test_quantized_classification_model(model_name):
set_rng_seed(0)
defaults = {
"num_classes": 5,
"input_shape": (1, 3, 224, 224),
"pretrained": False,
"quantize": True,
@@ -697,8 +704,15 @@ def test_quantized_classification_model(model_name):

# First check if quantize=True provides models that can run with input data
model = torchvision.models.quantization.__dict__[model_name](**kwargs)
model.eval()
x = torch.rand(input_shape)
model(x)
out = model(x)

if model_name not in quantized_flaky_models:
_assert_expected(out, model_name + "_quantized", prec=0.1)
assert out.shape[-1] == 5
_check_jit_scriptable(model, (x,), unwrapper=script_model_unwrapper.get(model_name, None))
_check_fx_compatible(model, x)

kwargs["quantize"] = False
for eval_mode in [True, False]:
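The core mechanism this PR adds, comparing a model's output against a cached expected tensor with a tolerance, can be sketched as follows. The helper name `assert_expected`, the `<name>_expect.pkl` naming, and the record-on-first-run behavior are illustrative assumptions, not torchvision's actual `_assert_expected` implementation:

```python
# Minimal sketch of an expected-output regression test: cache the model's
# output on the first run, compare against the cache on later runs.
import os
import tempfile

import torch


def assert_expected(output: torch.Tensor, name: str, prec: float, cache_dir: str) -> None:
    path = os.path.join(cache_dir, f"{name}_expect.pkl")
    if not os.path.exists(path):
        torch.save(output, path)  # first run: record the expected values
        return
    expected = torch.load(path)
    # Tolerate small per-element differences, e.g. platform rounding.
    torch.testing.assert_close(output, expected, atol=prec, rtol=prec)


# A tiny deterministic model stands in for a quantized classifier.
torch.manual_seed(16)
model = torch.nn.Linear(8, 5)
model.eval()
x = torch.rand(1, 8)

with tempfile.TemporaryDirectory() as cache_dir, torch.no_grad():
    out = model(x)
    assert_expected(out, "tiny_quantized", prec=0.1, cache_dir=cache_dir)  # records the cache
    assert_expected(model(x), "tiny_quantized", prec=0.1, cache_dir=cache_dir)  # compares
```

Under this scheme, skipping a model listed in `quantized_flaky_models` bypasses only the cached comparison; the model is still constructed, run, scripted, and FX-traced, as in the diff above.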