Adding multiweight support to Quantized MobileNetV2 and MobileNetV3 #4859
Conversation
💊 CI failures summary and remediations

As of commit 9988a6c (more details on the Dr. CI page):

1 failure not recognized by patterns.

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Just a few clarifications to assist review:
@@ -4,8 +4,7 @@
 from .googlenet import *
 from .inception import *
 from .mnasnet import *
-from .mobilenetv2 import *
-from .mobilenetv3 import *
+from .mobilenet import *
For BC reasons, we need to maintain the `.mobilenet` namespace.
kwargs["num_classes"] = len(weights.meta["categories"])
if "backend" in weights.meta:
    kwargs["backend"] = weights.meta["backend"]
backend = kwargs.pop("backend", "qnnpack")
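The diff above reads the backend from the weight metadata when available and otherwise falls back to a default. A minimal stand-alone sketch of that pattern (the helper name `resolve_kwargs` is hypothetical, not torchvision's API):

```python
# Hypothetical sketch of the weight-metadata override pattern shown in the diff.
def resolve_kwargs(weights_meta, kwargs):
    """Fill kwargs from the weight metadata; metadata values win over defaults."""
    if weights_meta is not None:
        kwargs["num_classes"] = len(weights_meta["categories"])
        # Only override the backend when the weights carry one in their metadata.
        if "backend" in weights_meta:
            kwargs["backend"] = weights_meta["backend"]
    # Pop the backend so it is not forwarded to the model constructor;
    # default to "qnnpack" when neither caller nor metadata set it.
    backend = kwargs.pop("backend", "qnnpack")
    return backend, kwargs
```

Popping (rather than reading) the key keeps `kwargs` clean before it is forwarded to the model constructor.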
The default backend here is `qnnpack`.
def _mobilenet_v3_model(
I kept this method instead of dumping everything into the public method because in the future we might also want to support the `_small` version of the model, and this method will remain unchanged.
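The shared-builder pattern described above can be sketched as follows (function bodies and the returned dict are illustrative placeholders, not torchvision's actual implementation):

```python
# Illustrative sketch of a private builder shared by public constructors.
# The real builder constructs and optionally quantizes a model; a plain
# dict stands in for the model object here.
def _mobilenet_v3_model(arch: str, quantize: bool, **kwargs):
    """Private builder: adding a future variant only needs a thin wrapper."""
    return {"arch": arch, "quantize": quantize, **kwargs}

def mobilenet_v3_large(quantize: bool = False, **kwargs):
    return _mobilenet_v3_model("mobilenet_v3_large", quantize, **kwargs)

# A hypothetical future small variant would reuse the builder unchanged:
def mobilenet_v3_small(quantize: bool = False, **kwargs):
    return _mobilenet_v3_model("mobilenet_v3_small", quantize, **kwargs)
```

The design keeps the public constructors thin: each one only fixes the architecture-specific arguments and delegates everything else.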
if quantize:
    torch.quantization.convert(model, inplace=True)
    model.eval()
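For context, the `convert()` call above is the last step of PyTorch's eager-mode quantization flow. A minimal stand-alone version of that flow (illustrative only; the PR wires these steps through the weight metadata and uses `qnnpack` by default, while this sketch uses the `fbgemm` server config):

```python
import torch
import torch.nn as nn

# Minimal eager-mode quantization flow mirroring the convert()/eval() lines above.
model = nn.Sequential(
    torch.quantization.QuantStub(),    # float -> quantized boundary
    nn.Linear(4, 2),
    torch.quantization.DeQuantStub(),  # quantized -> float boundary
)
model.eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 4))  # calibration pass to collect activation ranges
torch.quantization.convert(model, inplace=True)  # swap in quantized modules
out = model(torch.randn(1, 4))
```

Calling `eval()` before (or right after) conversion matters because observers and fused modules behave differently in training mode.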
Check lines 44-54 carefully against the original; this is a simplification of the code.
Looks good to me!
…leNetV3 (#4859)
Summary:
* Adding multiweight support on Quant MobileNetV2 and MobileNetV3.
* Fixing enum name.
* Fixing lint.
Reviewed By: kazhang
Differential Revision: D32216681
fbshipit-source-id: 60e2d3c02508c65ec865e603c7155a74cb6bd8b3
…ytorch#4859
* Adding multiweight support on Quant MobileNetV2 and MobileNetV3.
* Fixing enum name.
* Fixing lint.
Fixes #4674
Verified with:
cc @bjuncek