RuntimeError: Missing out variants: {'quantized_decomposed::dequantize_per_tensor', 'quantized_decomposed::quantize_per_tensor'} #8369
Comments
Here are some related threads:
I think in your case you are quantizing with XNNPACK, maybe try forcing
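For context on what the two ops named in the error actually do: `quantize_per_tensor` and `dequantize_per_tensor` implement plain affine quantization. The following is a minimal pure-Python sketch of their semantics (an illustration, not ExecuTorch's implementation):

```python
# Toy sketch of the affine quantization math behind the
# quantized_decomposed::quantize_per_tensor / dequantize_per_tensor ops.
# Illustrative only; not ExecuTorch's actual kernels.

def quantize_per_tensor(xs, scale, zero_point, qmin=-128, qmax=127):
    """q = clamp(round(x / scale) + zero_point, qmin, qmax)."""
    out = []
    for x in xs:
        q = round(x / scale) + zero_point
        out.append(max(qmin, min(qmax, q)))
    return out

def dequantize_per_tensor(qs, scale, zero_point):
    """x ~= (q - zero_point) * scale (lossy round trip)."""
    return [(q - zero_point) * scale for q in qs]

xs = [0.0, 0.5, -1.0, 2.0]
qs = quantize_per_tensor(xs, scale=0.1, zero_point=0)
print(qs)                                  # -> [0, 5, -10, 20]
print(dequantize_per_tensor(qs, 0.1, 0))   # approximate round trip
```

A backend that claims these ops must provide kernels (out variants) for exactly this math, which is what the portable op library currently lacks.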
@jackzhxng
This is the script if you want to reproduce:
Try adding
Doesn't work:
Hi @ChristophKarlHeck, it looks like you're attempting to lower to your own custom backend. For now, ExecuTorch might not have implementations for those quantized variants. Generally these ops should be consumed by your backend in order to run the model: as in the example above, to run the quantized linear you match against the quantized pattern shown there. For now, the only way these ops are run is by delegating to XNNPACK, which recognizes this pattern as a quantized linear.
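The comment above describes the backend consuming a dequantize → linear → quantize chain as one fused quantized linear. A toy sketch of that kind of pattern recognition over a flat op list (hypothetical structure; not the real XNNPACK partitioner API):

```python
# Toy sketch of partitioner-style pattern matching: scan an op sequence
# for dequantize -> linear -> quantize and tag it as a fused quantized
# linear for the backend to consume. Hypothetical names, for illustration.

def find_quantized_linear(ops):
    """Return (start, end) index pairs of dq -> linear -> q chains."""
    pattern = ("dequantize_per_tensor", "linear", "quantize_per_tensor")
    matches = []
    for i in range(len(ops) - len(pattern) + 1):
        if tuple(ops[i:i + 3]) == pattern:
            matches.append((i, i + 2))
    return matches

graph = ["quantize_per_tensor", "dequantize_per_tensor", "linear",
         "quantize_per_tensor", "relu"]
print(find_quantized_linear(graph))   # -> [(1, 3)]
```

If no delegate claims the matched chain, the quantize/dequantize ops fall through to the portable runtime, which is where the "Missing out variants" error comes from.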
Hi @mcr229, I appreciate any help you can provide.
For the STM32WB55RG: we don't have Arm Cortex-M4 support just yet. It should only be supported via portable ops, but we don't have the quantized portable ops, for the reasons above. @digantdesai actually has plans for M4 support, so perhaps he can shed some light on any workarounds.
@digantdesai Thank you!
Closing this, feel free to reopen if you run into something similar. Good luck!
🐛 Describe the bug
Hi,
The following code throws the error mentioned in the title.
Am I doing anything wrong?
Cheers,
Christoph
Versions