🐛 Describe the bug
When using the Qualcomm-quantized LLaMA 3 8B model with sharding enabled (the code paths at lines 779 and 797 of example/model/export_llama_lib.py), the script tries to import 'canonicalize_program' from 'executorch.backends.qualcomm.utils.utils'. However, there is no symbol named 'canonicalize_program' in the Qualcomm backend's utils.py. Where could the issue be?
https://github.com/pytorch/executorch/blob/main/backends/qualcomm/utils/utils.py
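For reference, here is a minimal, self-contained sketch of how to confirm whether the symbol is actually missing from the installed package, before running the full export. The helper name `has_symbol` is my own, not part of ExecuTorch:

```python
import importlib


def has_symbol(module_name: str, symbol: str) -> bool:
    """Return True if `module_name` imports cleanly and defines `symbol`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # Module itself is not importable in this environment.
        return False
    return hasattr(module, symbol)


# Check for the symbol that export_llama_lib.py tries to import when
# sharding is enabled (prints False on installs where it is absent).
print(has_symbol("executorch.backends.qualcomm.utils.utils", "canonicalize_program"))
```

If this prints False, the installed ExecuTorch version does not provide `canonicalize_program`, which would point to a version mismatch between the export script and the Qualcomm backend utils.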
Thanks, and Merry Christmas!
Versions
v0.5 main branch