Android app fails with ETensor rank is immutable error #1306

Closed
adonnini opened this issue Nov 29, 2023 · 12 comments

@adonnini

Hi,
Model loading completes successfully. The application fails when attempting to execute the following:

outputTensor = mModule.forward(from(arrDataPytorch)).toTensor();

where arrDataPytorch is produced as follows:

			float[] flat = flatten(tmpData);
			final long[] shapeArrDataPytorchFlattened = new long[]{1, flat.length};
			arrDataPytorch = Tensor.fromBlob(flat, shapeArrDataPytorchFlattened);
			locationInformationNeuralNetworkFeaturesFlattenedPytorch = Tensor.fromBlob(flat, shapeArrDataPytorchFlattened);
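
For reference, flatten() is assumed to be a simple row-major flattening helper along the lines of the hypothetical sketch below (the actual implementation is not shown in this issue):

			// Hypothetical sketch of the flatten() helper referenced above,
			// assuming tmpData is a float[][]; it copies the rows into a
			// single 1-D array in row-major order.
			private static float[] flatten(float[][] data) {
				int rows = data.length;
				int cols = rows > 0 ? data[0].length : 0;
				float[] out = new float[rows * cols];
				for (int i = 0; i < rows; i++) {
					System.arraycopy(data[i], 0, out, i * cols, cols);
				}
				return out;
			}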

Below you will find the complete logcat.

Please keep in mind that this code runs to completion when using the PyTorch Mobile runtime.

Please let me know if you need any additional information.

Thanks

LOGCAT

11-29 07:11:44.286: I/NeuralNetworkService(4977): - NeuralNetworkServiceRunnable - About to run neuralNetworkloadAndRun ---
11-29 07:11:44.286: I/NeuralNetworkService(4977): - neuralNetworkloadAndRunPytorch - Running -
11-29 07:11:44.286: I/NeuralNetworkService(4977): - neuralNetworkloadAndRunPytorch - locationInformationDir - /data/user/0/com.android.contextq/files/locationInformation/
11-29 07:11:44.286: I/NeuralNetworkService(4977): - neuralNetworkloadAndRunPytorch - savedNetworkArchiveLength - 120669888
11-29 07:11:44.287: I/NeuralNetworkService(4977): - neuralNetworkloadAndRunPytorch - Abut to load module ---
11-29 07:11:44.465: I/ETLOG(4977): Model file /data/user/0/com.android.contextq/files/locationInformation/tfmodel_exnnpack.pte is loaded.
11-29 07:11:44.465: I/ETLOG(4977): Setting up planned buffer 0, size 23366800.
11-29 07:11:44.485: W/libc(4977): Access denied finding property "ro.hardware.chipname"
11-29 07:11:44.512: D/XNNPACK(4977): allocated 6144 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.513: D/XNNPACK(4977): created workspace of size 774176
11-29 07:11:44.514: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.531: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.540: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.545: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.545: D/XNNPACK(4977): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
11-29 07:11:44.545: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.546: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.564: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.567: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.638: D/XNNPACK(4977): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.709: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.727: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.727: D/XNNPACK(4977): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
11-29 07:11:44.727: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.728: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.731: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.800: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.812: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.812: D/XNNPACK(4977): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
11-29 07:11:44.812: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.812: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.814: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.846: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.851: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.851: D/XNNPACK(4977): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
11-29 07:11:44.851: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.851: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.853: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.869: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.873: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.874: D/XNNPACK(4977): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
11-29 07:11:44.874: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.874: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.875: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.892: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.902: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.912: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.916: D/XNNPACK(4977): allocated 8192 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.916: D/XNNPACK(4977): created workspace of size 1327136
11-29 07:11:44.917: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.921: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.925: D/XNNPACK(4977): reusing tensor id #8 memory for tensor id #5 Node #2 Softmax
11-29 07:11:44.925: D/XNNPACK(4977): created workspace of size 42368
11-29 07:11:44.925: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:44.926: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.930: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.934: I/XNNPACK(4977): fuse Clamp Node #2 into upstream Node #1
11-29 07:11:44.936: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.952: D/XNNPACK(4977): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.968: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.972: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.976: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.980: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:44.981: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.985: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:44.989: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:44.991: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.008: D/XNNPACK(4977): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.024: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.028: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.032: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:45.033: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.037: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.041: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.045: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.050: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:45.050: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.054: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:45.056: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.071: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:45.071: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.075: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:45.076: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.080: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:45.081: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.096: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:45.097: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.101: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:45.101: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.113: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:45.116: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.161: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:45.162: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.167: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:45.168: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.172: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:45.195: D/XNNPACK(4977): created workspace of size 663584
11-29 07:11:45.196: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.200: D/XNNPACK(4977): created workspace of size 387104
11-29 07:11:45.200: D/XNNPACK(4977): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.217: I/XNNPACK(4977): fuse Clamp Node #1 into upstream Node #0
11-29 07:11:45.219: D/XNNPACK(4977): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.235: D/XNNPACK(4977): allocated 16416 bytes for packed weights in Fully Connected (NC, F32) operator
11-29 07:11:45.255: I/NeuralNetworkService(4977): - neuralNetworkloadAndRunPytorch - Abut to run inference ---
11-29 07:11:45.255: I/ETLOG(4977): ETensor rank is immutable old: 3 new: 2
11-29 07:11:45.255: I/ETLOG(4977): Error setting input 0: 0x10
11-29 07:11:45.255: I/ETLOG(4977): In function forward(), assert failed: set_input_status == Error::Ok
11-29 07:11:45.255: A/libc(4977): Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 4996 (Thread-2), pid 4977 (lNetworkService)
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 4977
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 1757
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 536
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 583
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 26436
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 13
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 355
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 2166
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 4262
11-29 07:11:45.295: I/ActivityManager(1757): Skipping next CPU consuming process, not a java proc: 5249
11-29 07:11:45.297: I/ActivityManager(1757): Dumping to /data/anr/anr_2023-11-29-07-11-45-296
11-29 07:11:45.297: I/ActivityManager(1757): Collecting stacks for pid 4977
11-29 07:11:45.298: I/system_server(1757): libdebuggerd_client: started dumping process 4977
11-29 07:11:45.299: I/tombstoned(686): registered intercept for pid 4977 and type kDebuggerdJavaBacktrace
11-29 07:11:45.300: I/lNetworkServic(4977): Thread[3,tid=4982,WaitingInMainSignalCatcherLoop,Thread*=0xb400006f52155f50,peer=0x16c80000,"Signal Catcher"]: reacting to signal 3
11-29 07:11:45.341: I/[email protected](984): uevent triggered for sensor: cpu-1-0-usr
11-29 07:11:45.341: I/[email protected](984): sensor: cpu-1-0-usr temperature: 89.6 old: 3 new: 0
11-29 07:11:45.341: I/[email protected](984): uevent triggered for sensor: cpu-1-0-usr
11-29 07:11:45.341: I/[email protected](984): uevent triggered for sensor: cpu-1-0-usr
11-29 07:11:45.341: I/[email protected](984): uevent triggered for sensor: cpu-1-0-usr
11-29 07:11:45.372: I/crash_dump64(5455): obtaining output fd from tombstoned, type: kDebuggerdTombstoneProto
11-29 07:11:45.372: I/tombstoned(686): received crash request for pid 4996
11-29 07:11:45.373: I/crash_dump64(5455): performing dump of process 4977 (target tid = 4996)
11-29 07:11:45.389: E/DEBUG(5455): failed to read /proc/uptime: Permission denied
11-29 07:11:45.735: A/DEBUG(5455): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
11-29 07:11:45.735: A/DEBUG(5455): Build fingerprint: 'Fairphone/FP4eea/FP4:12/SKQ1.220201.001/SP2K:user/release-keys'
11-29 07:11:45.735: A/DEBUG(5455): Revision: '0'
11-29 07:11:45.735: A/DEBUG(5455): ABI: 'arm64'
11-29 07:11:45.735: A/DEBUG(5455): Timestamp: 2023-11-29 07:11:45.388776052+0100
11-29 07:11:45.735: A/DEBUG(5455): Process uptime: 0s
11-29 07:11:45.735: A/DEBUG(5455): Cmdline: com.android.contextq:ContextQNeuralNetworkService
11-29 07:11:45.735: A/DEBUG(5455): pid: 4977, tid: 4996, name: Thread-2 >>> com.android.contextq:ContextQNeuralNetworkService <<<
11-29 07:11:45.735: A/DEBUG(5455): uid: 10207
11-29 07:11:45.735: A/DEBUG(5455): signal 6 (SIGABRT), code -1 (SI_QUEUE), fault addr --------
11-29 07:11:45.735: A/DEBUG(5455): x0 0000000000000000 x1 0000000000001384 x2 0000000000000006 x3 0000006dbded3f50
11-29 07:11:45.735: A/DEBUG(5455): x4 60651f7371647272 x5 60651f7371647272 x6 60651f7371647272 x7 7f7f7f7f7f7f7f7f
11-29 07:11:45.735: A/DEBUG(5455): x8 00000000000000f0 x9 c373882bbb12ef11 x10 0000000000000000 x11 ffffff80fffffbdf
11-29 07:11:45.735: A/DEBUG(5455): x12 0000000000000001 x13 0000000000000044 x14 0000006dbded3868 x15 0000000034155555
11-29 07:11:45.735: A/DEBUG(5455): x16 00000070f95c0060 x17 00000070f959c560 x18 0000006d58226000 x19 0000000000001371
11-29 07:11:45.735: A/DEBUG(5455): x20 0000000000001384 x21 00000000ffffffff x22 00000070fffb6298 x23 00000070fffb6298
11-29 07:11:45.735: A/DEBUG(5455): x24 0000006dbded45f0 x25 b400006f22150930 x26 0000000000002072 x27 00000070fffb5e78
11-29 07:11:45.735: A/DEBUG(5455): x28 0000006dbded44c0 x29 0000006dbded3fd0
11-29 07:11:45.735: A/DEBUG(5455): lr 00000070f954c95c sp 0000006dbded3f30 pc 00000070f954c988 pst 0000000000001000
11-29 07:11:45.735: A/DEBUG(5455): backtrace:
11-29 07:11:45.735: A/DEBUG(5455): #00 pc 0000000000051988 /apex/com.android.runtime/lib64/bionic/libc.so (abort+168) (BuildId: 369edc656806aeaf384cbeb8f7a347af)
11-29 07:11:45.735: A/DEBUG(5455): #1 pc 0000000000b95590 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (et_pal_abort+8) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #2 pc 0000000000b95398 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (torch::executor::runtime_abort()+8) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #3 pc 0000000000b72dac /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (executorch_jni::ExecuTorchJni::forward(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>)+596) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #4 pc 0000000000b73384 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (facebook::jni::detail::MethodWrapper<facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> (executorch_jni::ExecuTorchJni::)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject>, facebook::jni::detail::JTypeArray, void>::_javaobject*>), &(executorch_jni::ExecuTorchJni::forward(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>)), executorch_jni::ExecuTorchJni, facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*> >::dispatch(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>&&)+236) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #5 pc 0000000000b7ba64 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (facebook::jni::detail::CallWithJniConversions<facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> ()(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>&&), facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator>, facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*> >::call(facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*, facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*, facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> ()(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>&&))+96) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #6 pc 0000000000b731b4 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (facebook::jni::detail::FunctionWrapper<facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> ()(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>&&), facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*, facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*> >::call(_JNIEnv*, _jobject*, facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*, facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> ()(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>&&))+64) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #7 pc 0000000000b6a754 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/base.apk!libexecutorchdemo.so (facebook::jni::detail::MethodWrapper<facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator> (executorch_jni::ExecuTorchJni::)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject>, facebook::jni::detail::JTypeArray, void>::_javaobject*>), &(executorch_jni::ExecuTorchJni::forward(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*>)), executorch_jni::ExecuTorchJni, facebook::jni::basic_strong_ref<executorch_jni::JEValue, facebook::jni::LocalReferenceAllocator>, facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*> >::call(_JNIEnv*, _jobject*, facebook::jni::detail::JTypeFor<facebook::jni::JArrayClass<facebook::jni::detail::JTypeFor<executorch_jni::JEValue, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::detail::JTypeArray, void>::_javaobject*)+44) (BuildId: 8065dc692f8e345f80fe49a1f2162d7e784b3499)
11-29 07:11:45.735: A/DEBUG(5455): #8 pc 000000000034aa30 /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+144) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #9 pc 0000000000333fa4 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+612) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #10 pc 0000000000511078 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, bool, art::JValue*)+1976) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #11 pc 000000000049efac /apex/com.android.art/lib64/libart.so (void art::interpreter::ExecuteSwitchImplCpp(art::interpreter::SwitchImplContext*)+4716) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #12 pc 000000000034d1d8 /apex/com.android.art/lib64/libart.so (ExecuteSwitchImplAsm+8) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #13 pc 0000000000a29dfc /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/oat/arm64/base.vdex (com.example.executorchdemo.executor.Module.forward+0)
11-29 07:11:45.735: A/DEBUG(5455): #14 pc 0000000000378bb0 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.__uniq.112435418011751916792819755956732575238.llvm.11907307138045539842)+232) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #15 pc 0000000000511d44 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, bool, art::JValue*)+5252) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #16 pc 000000000049e100 /apex/com.android.art/lib64/libart.so (void art::interpreter::ExecuteSwitchImplCpp(art::interpreter::SwitchImplContext*)+960) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #17 pc 000000000034d1d8 /apex/com.android.art/lib64/libart.so (ExecuteSwitchImplAsm+8) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #18 pc 0000000000d26e5c /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/oat/arm64/base.vdex (com.android.contextq.neuralnetwork.NeuralNetworkService.neuralNetworkloadAndRunPytorch+0)
11-29 07:11:45.735: A/DEBUG(5455): #19 pc 0000000000378bb0 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.__uniq.112435418011751916792819755956732575238.llvm.11907307138045539842)+232) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #20 pc 0000000000511d44 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, bool, art::JValue*)+5252) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #21 pc 000000000049e470 /apex/com.android.art/lib64/libart.so (void art::interpreter::ExecuteSwitchImplCpp(art::interpreter::SwitchImplContext*)+1840) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #22 pc 000000000034d1d8 /apex/com.android.art/lib64/libart.so (ExecuteSwitchImplAsm+8) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #23 pc 0000000000d216b0 /data/app/~~inTvnzzPi2yrpfql0lFTPg==/com.android.contextq-0ebj2jXn0VMel1RNm-xTzw==/oat/arm64/base.vdex (com.android.contextq.neuralnetwork.NeuralNetworkService$NeuralNetworkServiceRunnable.run+0)
11-29 07:11:45.735: A/DEBUG(5455): #24 pc 0000000000378bb0 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.__uniq.112435418011751916792819755956732575238.llvm.11907307138045539842)+232) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #25 pc 0000000000511d44 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, bool, art::JValue*)+5252) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #26 pc 000000000049efac /apex/com.android.art/lib64/libart.so (void art::interpreter::ExecuteSwitchImplCpp(art::interpreter::SwitchImplContext*)+4716) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #27 pc 000000000034d1d8 /apex/com.android.art/lib64/libart.so (ExecuteSwitchImplAsm+8) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #28 pc 000000000010eaf0 /apex/com.android.art/javalib/core-oj.jar (java.lang.Thread.run+0)
11-29 07:11:45.735: A/DEBUG(5455): #29 pc 0000000000378bb0 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.__uniq.112435418011751916792819755956732575238.llvm.11907307138045539842)+232) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #30 pc 00000000003784a8 /apex/com.android.art/lib64/libart.so (artQuickToInterpreterBridge+964) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #31 pc 000000000034ab68 /apex/com.android.art/lib64/libart.so (art_quick_to_interpreter_bridge+88) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #32 pc 0000000000333fa4 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+612) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #33 pc 000000000023e4d4 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+144) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #34 pc 0000000000539a3c /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1600) (BuildId: 4cfdaa9e5146c43e20ae36ee1caf9b7f)
11-29 07:11:45.735: A/DEBUG(5455): #35 pc 00000000000b6a24 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+264) (BuildId: 369edc656806aeaf384cbeb8f7a347af)
11-29 07:11:45.735: A/DEBUG(5455): #36 pc 00000000000532bc /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+68) (BuildId: 369edc656806aeaf384cbeb8f7a347af)
11-29 07:11:45.771: E/tombstoned(686): Tombstone written to: tombstone_27

@adonnini
Author

Hi,
I hope I am not being too much of a nuisance. I would like to be able to move forward with my work.
Could you give me an estimate of when someone will be able to take a look at this latest problem I have run into?
Thanks

@Jack-Khuu
Contributor

@kirklandsign Ideas?

@JacobSzwejbka
Contributor

JacobSzwejbka commented Nov 29, 2023

Tensor rank is immutable in export and in ExecuTorch. I'm guessing the model was captured with an input that had a batch dimension and you are passing it one that lacks it ('ETensor rank is immutable old: 3 new: 2'). You could try running unsqueeze on your input before passing it along.
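
For illustration only, a minimal sketch of what that could look like on the Android side, assuming tmpData is a float[][] of shape [seqLen][featureDim] and the model was exported with a rank-3 (batch, sequence, feature) input; the extra leading dimension of size 1 is added via the shape passed to Tensor.fromBlob rather than a separate unsqueeze call:

	// Hypothetical sketch, not the poster's actual code: build the input with
	// the same rank (3) that the model was exported with.
	float[] flat = flatten(tmpData);
	long seqLen = tmpData.length;             // assumed sequence length
	long featureDim = flat.length / seqLen;   // assumed feature width per step
	long[] rank3Shape = new long[]{1, seqLen, featureDim};  // leading batch dimension of 1
	arrDataPytorch = Tensor.fromBlob(flat, rank3Shape);
	outputTensor = mModule.forward(from(arrDataPytorch)).toTensor();

This only helps if the exported model really expects a rank-3 input of that shape; if it was exported with a different input signature, the shape has to match that signature instead.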

@adonnini
Author

adonnini commented Nov 30, 2023

@JacobSzwejbka I am puzzled because this problem does not happen when I use the PyTorch Mobile runtime and process the model with TorchScript; there I am able to run inference with the model in my Android app without any problems.
It sounds like AOT processing with ExecuTorch does something that causes this problem.
How does TorchScript processing differ from AOT processing with ExecuTorch? Could XNNPACK processing have something to do with this problem? Does XNNPACK processing need to happen in order to lower the model to the device?
Following your suggestion might not be an option with my model.
Thanks

@adonnini
Author

adonnini commented Nov 30, 2023

@JacobSzwejbka to clarify, when you say
"You could try just running unsqueeze on your input before passing it along"
you mean before passing it along to ExecuTorch, right? If so, given that I do not do that when I run the model in training/testing/validation mode, wouldn't that break the model unless I made other changes to it?
I am sorry. Perhaps I did not quite understand your suggestion.
Thanks

@adonnini
Author

adonnini commented Dec 1, 2023

@JacobSzwejbka sorry to bug you again. Could you please answer my questions above? I am stuck until I understand better what you meant by your suggestions.
While waiting for your response, I did some work on this. Simply unsqueezing the input as you suggest breaks the model. Perhaps I misunderstood your suggestion, which is why I asked the questions above.
I was also thinking that if such changes are required to use ExecuTorch, it may not be practical in a production environment.
Again, sorry to bother you. Thanks

@adonnini
Author

adonnini commented Dec 2, 2023

@JacobSzwejbka could you please let me know whether you will respond to my comments above? If not, I will not bother you any longer. Thanks for your time and your patience.

@adonnini
Author

adonnini commented Dec 3, 2023

@Jack-Khuu I checked the code for the model again. All inputs have rank 3. I don't think unsqueezing the inputs as @JacobSzwejbka suggested is necessary, and doing so breaks the model.
It is not clear to me which tensor the 'ETensor rank is immutable old: 3 new: 2' error refers to. Could you please clarify?
Could someone take a closer look and let me know what I might try to resolve this problem?
Sorry to bug you both again about this but until this problem is resolved, I cannot move forward with my work.
Thanks

@adonnini
Author

adonnini commented Dec 5, 2023

@JacobSzwejbka what did you mean precisely when you stated
"Im guessing the model was captured with an input that had a batch dimension and you are passing it one that lacks that"
Are you referring to the input dataset I am passing to the model in the Android application?
It is the same input dataset I passed to the model when I used PyTorch Mobile, which worked.
Thanks

@adonnini adonnini closed this as completed Dec 5, 2023

@ali-khosh

Hey @adonnini I'm the ExecuTorch product manager with the PyTorch team. Thanks for your interest in trying and contributing to ExecuTorch! I noticed you've been filing issues and interacting with the team and was wondering if you're interested in having an informal conversation so we learn more about your use case, wish list and existing pain points. Please feel free to email me at [email protected]. Thanks. Ali.

@Edward-YS

Hello! I have used my own best.torchscript on Android but it crashed. Do you know how to make it work?
