Dynamic Shapes issue #3636
Hi, dynamism of tensor rank isn't supported, so changing tensor rank requires recompiling the entire model. However, it looks like in one of these cases you are specifying dynamism on the 1st dimension, and should be resizing from 1x3 -> 1x4 (1x3 is the shape you captured the model with, and 1x4 is the shape of the input you are giving at inference). The error messages look flipped: the 1x3 -> 1x4 case is producing the "rank is immutable" error, while the 1x3 -> 4 case (your second example) is producing a different error. Do you mind sharing the graph of the model you are exporting?
Hi, from what I understood, the rank of the tensor input given (i.e. (1, 3)) is 2, so (1, 4) should also be rank 2, right? Excuse me if this is wrong, I am still learning. The graph output was:
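To make the rank point concrete: rank is just the number of dimensions in a shape, so resizing (1, 3) to (1, 4) keeps rank 2, while flattening to (4,) drops to rank 1 and breaks the exported graph's rank contract. A quick plain-Python illustration (shapes written as tuples):

```python
# Rank = number of dimensions in the shape.
def rank(shape):
    return len(shape)

exported_shape = (1, 3)    # shape the model was captured with
resized_shape = (1, 4)     # dim 1 resized: rank is unchanged
flattened_shape = (4,)     # rank 1: this is the case that trips
                           # the "rank is immutable" class of error

assert rank(exported_shape) == rank(resized_shape) == 2
assert rank(flattened_shape) == 1
```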
Yup, that's right. I believe in one of the examples you are using a (4,) tensor, which has rank one, which I believe is why one of the error messages is: As for the second case where you are using (1, 4), the error message looks strange, because you are reshaping dimension 1 to a new_size of 4. Can you try using the following ExecutorchBackendConfig when exporting the model:

```python
ExecutorchBackendConfig(
    sym_shape_eval_pass=ConstraintBasedSymShapeEvalPass(),
)
```
This suggests that you are passing a size [4] tensor to the runtime, not [1, 4]. You also probably need to add `sym_shape_eval_pass=ConstraintBasedSymShapeEvalPass()` to your ExecutorchBackendConfig so that memory planning uses the dynamic_shapes information for the upper bound, instead of the inputs you passed in. I'm working on having this become the default; there are just some legacy cases that are making it difficult.
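For context, that config plugs into the `to_executorch` lowering step. A minimal sketch of where it goes in the flow (module paths are assumptions based on the ExecuTorch release discussed here and may differ in other versions; `exported_program` is assumed to be the result of the export step):

```python
# Sketch: apply ConstraintBasedSymShapeEvalPass during lowering so memory
# planning sizes buffers from the declared dynamic-shape upper bounds rather
# than the concrete sizes of the example inputs. Import paths are assumptions
# and may differ between ExecuTorch releases.
from executorch.exir import to_edge, ExecutorchBackendConfig
from executorch.exir.passes.sym_shape_eval_pass import ConstraintBasedSymShapeEvalPass

edge_program = to_edge(exported_program)  # exported_program from the export step
executorch_program = edge_program.to_executorch(
    ExecutorchBackendConfig(
        sym_shape_eval_pass=ConstraintBasedSymShapeEvalPass(),
    )
)

with open("dynamic_model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```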
lol was just about to tag you @JacobSzwejbka
Thanks for the responses. I tried adding sym_shape_eval_pass=ConstraintBasedSymShapeEvalPass() to the ExecutorchBackendConfig, like so:
but I still get the following error when supplying a tensor of shape (1, 4):
Hmm, the error log still seems to suggest that we are changing tensor rank, given that the graph we are exporting has an input of rank two. @JacobSzwejbka any idea here?
@ismaeelbashir03 are you using the code linked above to export, or do you have other local changes now?
Oh, one issue is you need to pass dynamic shape info to torch._export.capture_pre_autograd_graph(model, example_inputs)
The exact code I'm using to export is below; the helper functions are from the
I changed it to this and still have the issue. Am I passing it in correctly?
Tried this out locally
and it worked. So it might just be shapes getting messed up somewhere in your flow.
I see. Could this maybe be an issue with how the model is run in Java?
I wonder if Tensor.fromBlob is working correctly. Perhaps we need to check what the shapes are after creating the tensor? Maybe there is a bug there?
I managed to fix the issue, but it was really weird. I changed the name of the output file to something else and it ran fine. I then changed it back to the old name and it broke again. It might be something to do with Android Studio caching the old file, since I gave it the same name? Thanks for the help though, I really appreciate it 👍.
I have been playing around with ExecuTorch and can't seem to get the dynamic shapes feature working on some toy examples. Whenever I attempt to add dynamic inputs to my model, I cannot make a forward pass in my Android app. Originally I was using the XNNPACK delegation, but after reading that it does not support dynamic shapes, I removed the XNNPACK partitioner. If anyone can help, I would greatly appreciate it.
Code for compiling to a .pte:
I am doing the forward pass on android like this:
I get the following error in logcat:
I have also tried reducing the size by doing:
but this gives another error on logcat: