Android application crash - Missing operator: [4] aten::native_dropout.out #1287
Comments
cc @manuelcandales |
Hi @larryliu0820 thanks. I guess that means that until the issue is verified/fixed I cannot move forward with my work, right? |
For inference optimization we should not really have a dropout operator. @kirklandsign can you or someone write a pass to remove dropout? |
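ExecuTorch's real passes run over the exported Edge dialect graph, so the following is only a simplified sketch of the idea: a `torch.fx` pass that rewires every `nn.Dropout` call to its input and deletes the node. `Net` is a hypothetical stand-in model, not code from this thread.

```python
import torch
from torch import nn
import torch.fx as fx

class Net(nn.Module):  # hypothetical toy model for illustration
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        return self.dropout(self.linear(x))

def remove_dropout(gm: fx.GraphModule) -> fx.GraphModule:
    """Replace each nn.Dropout call with its input, then erase the node."""
    for node in list(gm.graph.nodes):
        if node.op == "call_module" and isinstance(
            gm.get_submodule(node.target), nn.Dropout
        ):
            node.replace_all_uses_with(node.args[0])
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm

gm = remove_dropout(fx.symbolic_trace(Net()))
```

After the pass, the graph contains no dropout calls, so the output is deterministic even with the module left in train mode.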
We should add this op into one of the existing passes, where we are removing a lot of other ops. On the other hand, is it possible that we are not telling PyTorch we are doing inference? @adonnini can you confirm that you have used m.eval() or torch.no_grad()? |
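For context on why m.eval() matters here: in eval mode nn.Dropout is the identity, so tracing the model for export should not emit a dropout op at all, while in train mode it randomly zeroes elements. A minimal demonstration:

```python
import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(16)

# Eval mode: dropout is the identity, nothing is dropped.
drop.eval()
assert torch.equal(drop(x), x)

# Train mode: elements are randomly zeroed and the survivors are
# scaled by 1 / (1 - p), so the output generally differs from x.
drop.train()
```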
That is the reason why I suggest having a separate pass, so as to make sure it is used for inference only. |
Yeah, if this is the case, the point being we should give the user a warning asking them to do m.eval(), instead of removing the op ourselves. |
Here is my executorch-related code from my Android app. As you will see, code execution stops after the module is loaded. No other command is executed. Please refer to the logcat I sent in a previous comment, where it seems to indicate that the model was loaded. Please let me know what I should do next. Thanks
MY CODE
DATA PREPARATION
INFERENCE
|
@adonnini we are particularly interested in the AOT code where you prepare the model |
@larryliu0820 I could give you the GitHub repo for the model I based mine on. I did not change it significantly; I only adapted it to use my input dataset. |
Does this mean we don't need to add native_dropout to portable? By the way, that op is currently classified as a core op. Does this mean there are core ops that don't need to be added to the portable library? |
Below is the code to create and save the model using executorch inside the training epoch loop (no validation).
|
Can you add m.eval() after this line: m = model.TFModel(...)? |
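A sketch of where that call would go, with TinyModel standing in for the author's model.TFModel (whose definition is not shown in this thread):

```python
import torch
from torch import nn

class TinyModel(nn.Module):  # hypothetical stand-in for model.TFModel
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 2)
        self.dropout = nn.Dropout(p=0.1)

    def forward(self, x):
        return self.dropout(self.linear(x))

m = TinyModel()  # in the author's code: m = model.TFModel(...)
m.eval()         # the suggested line: switch to inference behavior
                 # before tracing/exporting the model

# With dropout disabled, repeated forward calls on the same input agree.
x = torch.randn(1, 8)
assert torch.equal(m(x), m(x))
```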
@larryliu0820 I added the line as you asked. However, the code fails at this line
producing the error log below. I seem to remember I had to make a one-line change to an executorch module, but I am not sure. The last time I ran this code (November 9) it worked without problems.
|
Manuel, we probably should add the op, but for the inference case I am contending we don't need it. |
@larryliu0820 If execution completes normally, what should I do next? What was the reason for adding it? |
Yes please use the model after adding m.eval() |
Closing this one since it's resolved. |
Hi,
After loading my model for inference in my Android application via the executorch runtime library, the application crashes.
Below you will find the logcat with all the information regarding the crash.
Please let me know if you need any other information, and what I should do next.
Thanks
LOGCAT