Generating ETDump fails when using XNNPACK delegation #8177
Comments
cc @mcr229 can you help with this?
hmm, this seems like an issue with generating ETDump? I'm actually not sure how that changes with to_edge vs to_edge_transform_and_lower. @Olivia-liu @tarun292 do you know how ETDump interacts with these different API surfaces?
@sfsouthpalatinate can you share the failure stack trace you're seeing? I don't see the stack trace in the issue.
@tarun292 The problem is that the Python script runs without any error. It's only when I run the bp file that I get the output "Terminated" with no stack trace, which made it challenging for me to debug. I was also a bit confused by the XNNPACK delegation API, since according to the examples there are two ways of calling it (sketched below).
Neither of them worked for generating the ETDump.
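For reference, here is a minimal sketch of the two call patterns referred to above, based on the XNNPACK delegation examples. The `torch.nn.Linear` module and its inputs are placeholders, not the model from the tutorial:

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge, to_edge_transform_and_lower

# Placeholder model and inputs; substitute the actual model used in the example.
model = torch.nn.Linear(4, 4).eval()
example_inputs = (torch.randn(1, 4),)
exported = torch.export.export(model, example_inputs)

# Variant 1: to_edge() followed by to_backend() with the XNNPACK partitioner.
edge_1 = to_edge(exported).to_backend(XnnpackPartitioner())
exec_prog_1 = edge_1.to_executorch()

# Variant 2: the combined to_edge_transform_and_lower() entry point.
edge_2 = to_edge_transform_and_lower(exported, partitioner=[XnnpackPartitioner()])
exec_prog_2 = edge_2.to_executorch()
```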
@spalatinate you should use
@tarun292 Sorry for my delayed answer. I have built the runner in debug mode (set DCMAKE_BUILD_TYPE=Debug in build_example_runner.sh). After running the bp file again, I still got "Terminated" and no stack trace. I attached the Python script used to generate the bp file, since I thought that would be easier for inspection. Without XNNPACK delegation, the ETDump generation works just fine.
🐛 Describe the bug
When reproducing the ETDump generation example, the runner used to execute the Bundled Program file outputs "aborted" when I try to run the bp file. The bp file was generated as follows: in the ETDump generation example I simply replaced the to_edge() call with the API for the XNNPACK backend, to_edge_transform_and_lower(). See below:
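(The original snippet did not survive the copy; the following is a minimal sketch of the change described, assuming the BundledProgram flow from the ExecuTorch devtools ETDump tutorial. The `torch.nn.Linear` model and inputs are placeholders, and the `executorch.devtools` import paths may differ on older releases, which used `executorch.sdk`.)

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.devtools import BundledProgram
from executorch.devtools.bundled_program.config import MethodTestCase, MethodTestSuite
from executorch.devtools.bundled_program.serialize import (
    serialize_from_bundled_program_to_flatbuffer,
)
from executorch.exir import to_edge_transform_and_lower

# Placeholder model and inputs; the tutorial uses its own example module.
model = torch.nn.Linear(4, 4).eval()
example_inputs = (torch.randn(1, 4),)
exported = torch.export.export(model, example_inputs)

# Previously: to_edge(exported). Now lowered to XNNPACK in one step.
edge_program = to_edge_transform_and_lower(exported, partitioner=[XnnpackPartitioner()])
exec_prog = edge_program.to_executorch()

# Bundle reference inputs/outputs so the runner can verify results and emit an ETDump.
method_test_suites = [
    MethodTestSuite(
        method_name="forward",
        test_cases=[
            MethodTestCase(inputs=example_inputs, expected_outputs=model(*example_inputs))
        ],
    )
]
bundled_program = BundledProgram(exec_prog, method_test_suites)
with open("bundled_program.bp", "wb") as f:
    f.write(serialize_from_bundled_program_to_flatbuffer(bundled_program))
```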
The output is then:
With 'to_edge()' everything works just fine. Can anyone point me in the right direction? Thanks!
Versions
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (aarch64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.31.2
Libc version: glibc-2.36
Python version: 3.10.0 (default, Mar 3 2022, 09:51:40) [GCC 10.2.0] (64-bit runtime)
Python platform: Linux-6.6.62+rpt-rpi-v8-aarch64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Cortex-A76
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 1
Stepping: r4p1
CPU(s) scaling MHz: 100%
CPU max MHz: 2400,0000
CPU min MHz: 1500,0000
BogoMIPS: 108,00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 2 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+6a085ff
[pip3] numpy==1.26.4
[pip3] torch==2.5.0
[pip3] torchao==0.5.0+git0916b5b2
[pip3] torchaudio==2.5.0
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0
[conda] executorch 0.4.0a0+6a085ff pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchao 0.5.0+git0916b5b2 pypi_0 pypi
[conda] torchaudio 2.5.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0 pypi_0 pypi
cc @digantdesai @mcr229 @mergennachin @byjlw