Failure to set up Executorch #8869

Open
adonnini opened this issue Mar 1, 2025 · 12 comments
Assignees
Labels
module: user experience Issues related to reducing friction for users need-user-input The issue needs more information from the reporter before moving forward

Comments

@adonnini

adonnini commented Mar 1, 2025

Hi,
I have set up ExecuTorch dozens of times. This is the first time I have failed to set it up following the instructions here:
https://pytorch.org/executorch/stable/getting-started-setup.html

./install_requirements.sh failed, producing the error log you will find below.

  Caused by:
      0: Failed to stat `/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck-out/v2`
      1: ENOENT: No such file or directory

(I checked the contents of buck-out/v2. There is no forkserver folder, and copying one from another working copy of ExecuTorch does not help.)
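The check above amounts to something like the following (a sketch; run from the executorch checkout root):

```shell
# Inspect the directory that buck2 failed to stat. buck2 normally
# creates buck-out/v2 itself on first use, so its absence suggests a
# stale daemon or a daemon started from a different working directory.
ls -ld buck-out buck-out/v2 2>&1 || true
```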

Below you will also find the output of collect_env.py

Please let me know what I did wrong, what I should do next, and if you need additional information.

Thanks

ERROR LOG


  -- Using python executable '/home/adonnini1/anaconda3/envs/executorch/bin/python'
  -- Resolved buck2 as /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284.
  -- Killing buck2 daemon
  '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284 killall'
  -- executorch: Generating source lists
  -- executorch: Generating source file list /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out/executorch_srcs.cmake
  Error while generating /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out/executorch_srcs.cmake. Exit code: 1
  Output:

  Error:
  Traceback (most recent call last):
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/buck_util.py", line 26, in run
      cp: subprocess.CompletedProcess = subprocess.run(
    File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284', 'cquery', "inputs(deps('//runtime/executor:program'))"]' returned non-zero exit status 2.

  The above exception was the direct cause of the following exception:

  Traceback (most recent call last):
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 232, in <module>
      main()
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 217, in main
      target_to_srcs[name] = sorted(target.get_sources(graph, runner, buck_args))
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 121, in get_sources
      sources: set[str] = set(runner.run(["cquery", query] + buck_args))
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/buck_util.py", line 31, in run
      raise RuntimeError(ex.stderr.decode("utf-8")) from ex
  RuntimeError: Command failed:
  Error validating working directory

  Caused by:
      0: Failed to stat `/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck-out/v2`
      1: ENOENT: No such file or directory


  CMake Error at build/Utils.cmake:216 (message):
    executorch: source list generation failed
  Call Stack (most recent call first):
    CMakeLists.txt:387 (extract_sources)


  -- Configuring incomplete, errors occurred!
  error: command '/home/adonnini1/anaconda3/envs/executorch/bin/cmake' failed with exit code 1
  error: subprocess-exited-with-error
  
  × Building wheel for executorch (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /home/adonnini1/anaconda3/envs/executorch/bin/python /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpa3t_q2sj
  cwd: /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch
  Building wheel for executorch (pyproject.toml) ... error
  ERROR: Failed building wheel for executorch
Failed to build executorch
ERROR: Failed to build installable wheels for some pyproject.toml based projects (executorch)
Traceback (most recent call last):
  File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/./install_requirements.py", line 198, in <module>
    subprocess.run(
  File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/adonnini1/anaconda3/envs/executorch/bin/python', '-m', 'pip', 'install', '.', '--no-build-isolation', '-v', '--extra-index-url', 'https://download.pytorch.org/whl/test/cpu']' returned non-zero exit status 1.

ENVIRONMENT

PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.36

Python version: 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.1.0-31-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               32
On-line CPU(s) list:                  0-31
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i9-13950HX
CPU family:                           6
Model:                                183
Thread(s) per core:                   2
Core(s) per socket:                   24
Socket(s):                            1
Stepping:                             1
CPU(s) scaling MHz:                   22%
CPU max MHz:                          5500.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4838.40
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            896 KiB (24 instances)
L1i cache:                            1.3 MiB (24 instances)
L2 cache:                             32 MiB (12 instances)
L3 cache:                             36 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-31
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] pytorch-forecasting==1.0.0
[pip3] pytorch-lightning==2.2.1
[pip3] pytorch_optimizer==2.12.0
[pip3] torch==2.6.0+cpu
[pip3] torchao==0.8.0+gitebc43034
[pip3] torchaudio==2.6.0+cpu
[pip3] torchmetrics==1.3.2
[pip3] torchsr==1.0.4
[pip3] torchvision==0.21.0+cpu
[conda] numpy                     2.2.3                    pypi_0    pypi
[conda] torch                     2.6.0+cpu                pypi_0    pypi
[conda] torchao                   0.8.0+gitebc43034          pypi_0    pypi
[conda] torchaudio                2.6.0+cpu                pypi_0    pypi
[conda] torchsr                   1.0.4                    pypi_0    pypi
[conda] torchvision               0.21.0+cpu               pypi_0    pypi

cc @mergennachin @byjlw

@guangy10
Contributor

guangy10 commented Mar 1, 2025

Have you tried `./install_executorch.sh --clean` and then re-running the script? Let us know how it works.

@guangy10 guangy10 added the need-user-input The issue needs more information from the reporter before moving forward label Mar 1, 2025
@adonnini
Author

adonnini commented Mar 1, 2025

@guangy10
Thanks for getting back to me.

There is no

install_executorch.sh
in /executorch or any of its sub-folders.

I did find it in an ExecuTorch installation dating back to May 2024. Copying it to the current executorch directory and running it (obviously) does not work.

The location I found it in

executorch050924/third-party/pytorch/.ci/docker/common/install_executorch.sh

does not exist in the current executorch folder (there is no pytorch subfolder in the third-party sub-folder).

What should I do next?

Thanks

@guangy10
Contributor

guangy10 commented Mar 1, 2025

@adonnini
Author

adonnini commented Mar 2, 2025

Thanks.

Running it from /executorch does not work. It fails with this error:

install_executorch.sh: line 10: ./run_python_script.sh: No such file or directory

Did I run it from the wrong location?

Why is it that install_executorch.sh is not in the executorch installation?

BTW, I completely removed executorch from my system a number of times and reinstalled it. The result was always the same failure:

      0: Failed to stat `/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck-out/v2`
      1: ENOENT: No such file or directory

Just last week I set up ExecuTorch for another model, in another location on my system, following the same instructions, with no problem. What may have changed since then to cause this failure?

Thanks

@guangy10
Contributor

guangy10 commented Mar 2, 2025


Strange. run_python_script.sh should be in the executorch root as well. Here it is: https://github.com/pytorch/executorch/blob/main/run_python_script.sh

Can you try a few things?

  1. Make sure you are on the latest main. Please share the commit hash where you hit the problem, and show which files are listed in the root directory.
  2. Manually remove the pip-out, buck-out, and cmake-out directories from the executorch root.
  3. Create a new conda env following the setup guide, then run the install script.
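Taken together, the steps above might look like this from the executorch checkout root (a sketch, assuming the checkout is the current directory; the conda commands mirror the setup guide):

```shell
# 1. Record the commit hash and root listing to share in the issue.
git rev-parse HEAD || true
ls

# 2. Remove stale build outputs; all three are regenerated by the build.
rm -rf pip-out buck-out cmake-out

# 3. In a fresh conda env, re-run the installer (interactive steps,
#    shown as comments so this sketch stays runnable as-is):
#    conda create -yn executorch python=3.10.0
#    conda activate executorch
#    ./install_requirements.sh
```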

@guangy10 guangy10 added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module module: user experience Issues related to reducing friction for users labels Mar 2, 2025
@github-project-automation github-project-automation bot moved this to To triage in ExecuTorch DevX Mar 2, 2025
@adonnini
Author

adonnini commented Mar 2, 2025

@guangy10 ,

I would be glad to help resolve this issue. It prevents me from making progress with my project. Unfortunately, where I am it's nighttime, and I will not be able to work on this until tomorrow early afternoon, as I have a work commitment in the morning.

BTW, will placing run_python_script.sh in the executorch root resolve the issue with install_executorch.sh?

Thanks

@adonnini
Author

adonnini commented Mar 3, 2025

@guangy10 I completely removed executorch from the folder I had installed it in. Then I set it up again following the instructions in
https://pytorch.org/executorch/stable/getting-started-setup.html

Below you will find the complete execution log including the error log.

I hope this helps. Please let me know what I should do next.

Thanks

EXECUTORCH SET-UP EXECUTION LOG

(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ conda create -yn executorch python=3.10.0
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/adonnini1/anaconda3/envs/executorch

  added / updated specs:
    - python=3.10.0


The following NEW packages will be INSTALLED:

  _libgcc_mutex      pkgs/main/linux-64::_libgcc_mutex-0.1-main
  _openmp_mutex      pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu
  bzip2              pkgs/main/linux-64::bzip2-1.0.8-h5eee18b_6
  ca-certificates    pkgs/main/linux-64::ca-certificates-2025.2.25-h06a4308_0
  ld_impl_linux-64   pkgs/main/linux-64::ld_impl_linux-64-2.40-h12ee557_0
  libffi             pkgs/main/linux-64::libffi-3.3-he6710b0_2
  libgcc-ng          pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1
  libgomp            pkgs/main/linux-64::libgomp-11.2.0-h1234567_1
  libstdcxx-ng       pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
  libuuid            pkgs/main/linux-64::libuuid-1.41.5-h5eee18b_0
  ncurses            pkgs/main/linux-64::ncurses-6.4-h6a678d5_0
  openssl            pkgs/main/linux-64::openssl-1.1.1w-h7f8727e_0
  pip                pkgs/main/linux-64::pip-25.0-py310h06a4308_0
  python             pkgs/main/linux-64::python-3.10.0-h12debd9_5
  readline           pkgs/main/linux-64::readline-8.2-h5eee18b_0
  setuptools         pkgs/main/linux-64::setuptools-75.8.0-py310h06a4308_0
  sqlite             pkgs/main/linux-64::sqlite-3.45.3-h5eee18b_0
  tk                 pkgs/main/linux-64::tk-8.6.14-h39e8969_0
  tzdata             pkgs/main/noarch::tzdata-2025a-h04d1e81_0
  wheel              pkgs/main/linux-64::wheel-0.45.1-py310h06a4308_0
  xz                 pkgs/main/linux-64::xz-5.6.4-h5eee18b_1
  zlib               pkgs/main/linux-64::zlib-1.2.13-h5eee18b_1


Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate executorch
#
# To deactivate an active environment, use
#
#     $ conda deactivate

Retrieving notices: ...working... done
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ conda activate executorch
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ git clone --branch release/0.5 https://github.com/pytorch/executorch.git
Cloning into 'executorch'...
remote: Enumerating objects: 181225, done.
remote: Counting objects: 100% (1265/1265), done.
remote: Compressing objects: 100% (624/624), done.
remote: Total 181225 (delta 1003), reused 649 (delta 641), pack-reused 179960 (from 3)
Receiving objects: 100% (181225/181225), 169.30 MiB | 41.85 MiB/s, done.
Resolving deltas: 100% (143254/143254), done.
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ cd executorch
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch$ git submodule sync
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch$ git submodule update --init
Submodule 'backends/arm/third-party/ethos-u-core-driver' (https://github.com/pytorch-labs/ethos-u-core-driver-mirror) registered for path 'backends/arm/third-party/ethos-u-core-driver'
Submodule 'backends/arm/third-party/serialization_lib' (https://github.com/pytorch-labs/tosa_serialization_lib-mirror) registered for path 'backends/arm/third-party/serialization_lib'
Submodule 'backends/cadence/fusion_g3/third-party/nnlib/nnlib-FusionG3' (https://github.com/foss-xtensa/nnlib-FusionG3/) registered for path 'backends/cadence/fusion_g3/third-party/nnlib/nnlib-FusionG3'
Submodule 'backends/cadence/hifi/third-party/nnlib/nnlib-hifi4' (https://github.com/foss-xtensa/nnlib-hifi4.git) registered for path 'backends/cadence/hifi/third-party/nnlib/nnlib-hifi4'
Submodule 'backends/vulkan/third-party/Vulkan-Headers' (https://github.com/KhronosGroup/Vulkan-Headers) registered for path 'backends/vulkan/third-party/Vulkan-Headers'
Submodule 'backends/vulkan/third-party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'backends/vulkan/third-party/VulkanMemoryAllocator'
Submodule 'backends/vulkan/third-party/volk' (https://github.com/zeux/volk) registered for path 'backends/vulkan/third-party/volk'
Submodule 'backends/xnnpack/third-party/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'backends/xnnpack/third-party/FP16'
Submodule 'backends/xnnpack/third-party/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'backends/xnnpack/third-party/FXdiv'
Submodule 'backends/xnnpack/third-party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'backends/xnnpack/third-party/XNNPACK'
Submodule 'backends/xnnpack/third-party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'backends/xnnpack/third-party/cpuinfo'
Submodule 'backends/xnnpack/third-party/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'backends/xnnpack/third-party/pthreadpool'
Submodule 'extension/llm/third-party/abseil-cpp' (https://github.com/abseil/abseil-cpp.git) registered for path 'extension/llm/third-party/abseil-cpp'
Submodule 'extension/llm/third-party/re2' (https://github.com/google/re2.git) registered for path 'extension/llm/third-party/re2'
Submodule 'extension/llm/third-party/sentencepiece' (https://github.com/google/sentencepiece.git) registered for path 'extension/llm/third-party/sentencepiece'
Submodule 'kernels/optimized/third-party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'kernels/optimized/third-party/eigen'
Submodule 'third-party/ao' (https://github.com/pytorch/ao.git) registered for path 'third-party/ao'
Submodule 'third-party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third-party/flatbuffers'
Submodule 'third-party/flatcc' (https://github.com/dvidelabs/flatcc.git) registered for path 'third-party/flatcc'
Submodule 'third-party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third-party/gflags'
Submodule 'third-party/googletest' (https://github.com/google/googletest.git) registered for path 'third-party/googletest'
Submodule 'third-party/ios-cmake' (https://github.com/leetal/ios-cmake) registered for path 'third-party/ios-cmake'
Submodule 'third-party/prelude' (https://github.com/facebook/buck2-prelude.git) registered for path 'third-party/prelude'
Submodule 'third-party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third-party/pybind11'
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/arm/third-party/ethos-u-core-driver'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/arm/third-party/serialization_lib'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/cadence/fusion_g3/third-party/nnlib/nnlib-FusionG3'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/cadence/hifi/third-party/nnlib/nnlib-hifi4'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/vulkan/third-party/Vulkan-Headers'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/vulkan/third-party/VulkanMemoryAllocator'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/vulkan/third-party/volk'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/xnnpack/third-party/FP16'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/xnnpack/third-party/FXdiv'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/xnnpack/third-party/XNNPACK'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/xnnpack/third-party/cpuinfo'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/backends/xnnpack/third-party/pthreadpool'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/extension/llm/third-party/abseil-cpp'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/extension/llm/third-party/re2'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/extension/llm/third-party/sentencepiece'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/kernels/optimized/third-party/eigen'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/ao'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/flatbuffers'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/flatcc'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/gflags'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/googletest'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/ios-cmake'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/prelude'...
Cloning into '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/third-party/pybind11'...
Submodule path 'backends/arm/third-party/ethos-u-core-driver': checked out '78df0006c5fa667150d3ee35db7bde1d3f6f58c7'
Submodule path 'backends/arm/third-party/serialization_lib': checked out '187af0d41fe75d08d2a7ec84c1b4d24b9b641ed2'
remote: Enumerating objects: 74, done.
remote: Counting objects: 100% (74/74), done.
remote: Compressing objects: 100% (32/32), done.
remote: Total 74 (delta 35), reused 74 (delta 35), pack-reused 0 (from 0)
Unpacking objects: 100% (74/74), 3.29 MiB | 14.23 MiB/s, done.
From https://github.com/foss-xtensa/nnlib-FusionG3
 * branch            8ddd1c39d4b20235ebe9dac68d92848da2885ece -> FETCH_HEAD
Submodule path 'backends/cadence/fusion_g3/third-party/nnlib/nnlib-FusionG3': checked out '8ddd1c39d4b20235ebe9dac68d92848da2885ece'
Submodule path 'backends/cadence/hifi/third-party/nnlib/nnlib-hifi4': checked out '102944a6f76a0de4d81adc431f3f132f517aa87f'
Submodule path 'backends/vulkan/third-party/Vulkan-Headers': checked out '0c5928795a66e93f65e5e68a36d8daa79a209dc2'
Submodule path 'backends/vulkan/third-party/VulkanMemoryAllocator': checked out 'a6bfc237255a6bac1513f7c1ebde6d8aed6b5191'
Submodule path 'backends/vulkan/third-party/volk': checked out 'b3bc21e584f97400b6884cb2a541a56c6a5ddba3'
Submodule path 'backends/xnnpack/third-party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3'
Submodule path 'backends/xnnpack/third-party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1'
Submodule path 'backends/xnnpack/third-party/XNNPACK': checked out '4ea82e595b36106653175dcb04b2aa532660d0d8'
Submodule path 'backends/xnnpack/third-party/cpuinfo': checked out '1e83a2fdd3102f65c6f1fb602c1b320486218a99'
Submodule path 'backends/xnnpack/third-party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8'
Submodule path 'extension/llm/third-party/abseil-cpp': checked out 'eb852207758a773965301d0ae717e4235fc5301a'
Submodule path 'extension/llm/third-party/re2': checked out '6dcd83d60f7944926bfd308cc13979fc53dd69ca'
Submodule path 'extension/llm/third-party/sentencepiece': checked out '6225e08edb2577757163b3f5dbba4c0b670ef445'
Submodule path 'kernels/optimized/third-party/eigen': checked out 'a39ade4ccf99df845ec85c580fbbb324f71952fa'
Submodule path 'third-party/ao': checked out 'ebc43034e665bcda759cf9ef9c2c207057c5eeb1'
Submodule path 'third-party/flatbuffers': checked out '595bf0007ab1929570c7671f091313c8fc20644e'
Submodule path 'third-party/flatcc': checked out '896db54787e8b730a6be482c69324751f3f5f117'
Submodule path 'third-party/gflags': checked out 'a738fdf9338412f83ab3f26f31ac11ed3f3ec4bd'
Submodule path 'third-party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
Submodule path 'third-party/ios-cmake': checked out '06465b27698424cf4a04a5ca4904d50a3c966c45'
Submodule path 'third-party/prelude': checked out '4e9e6d50b8b461564a7e351ff60b87fe59d7e53b'
Submodule path 'third-party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4'

(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch$ ./install_requirements.sh
Collecting packaging
  Using cached packaging-24.2-py3-none-any.whl.metadata (3.2 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Installing collected packages: packaging
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lightning 2.2.1 requires fsspec[http]<2025.0,>=2022.5.0, which is not installed.
lightning 2.2.1 requires numpy<3.0,>=1.17.2, which is not installed.
lightning 2.2.1 requires PyYAML<8.0,>=5.4, which is not installed.
lightning 2.2.1 requires torch<4.0,>=1.13.0, which is not installed.
lightning 2.2.1 requires tqdm<6.0,>=4.57.0, which is not installed.
lightning 2.2.1 requires typing-extensions<6.0,>=4.4.0, which is not installed.
lightning-utilities 0.11.2 requires typing-extensions, which is not installed.
optuna 3.6.1 requires numpy, which is not installed.
optuna 3.6.1 requires PyYAML, which is not installed.
optuna 3.6.1 requires tqdm, which is not installed.
pytorch-forecasting 1.0.0 requires matplotlib, which is not installed.
pytorch-forecasting 1.0.0 requires pandas<=3.0.0,>=1.3.0, which is not installed.
pytorch-forecasting 1.0.0 requires scikit-learn<2.0,>=1.2, which is not installed.
pytorch-forecasting 1.0.0 requires scipy<2.0,>=1.8, which is not installed.
pytorch-forecasting 1.0.0 requires torch<3.0.0,>=2.0.0, which is not installed.
pytorch-lightning 2.2.1 requires fsspec[http]>=2022.5.0, which is not installed.
pytorch-lightning 2.2.1 requires numpy>=1.17.2, which is not installed.
pytorch-lightning 2.2.1 requires PyYAML>=5.4, which is not installed.
pytorch-lightning 2.2.1 requires torch>=1.13.0, which is not installed.
pytorch-lightning 2.2.1 requires tqdm>=4.57.0, which is not installed.
pytorch-lightning 2.2.1 requires typing-extensions>=4.4.0, which is not installed.
statsmodels 0.14.1 requires numpy<2,>=1.18, which is not installed.
statsmodels 0.14.1 requires pandas!=2.1.0,>=1.0, which is not installed.
statsmodels 0.14.1 requires scipy!=1.9.2,>=1.4, which is not installed.
torchmetrics 1.3.2 requires numpy>1.20.0, which is not installed.
torchmetrics 1.3.2 requires torch>=1.10.0, which is not installed.
Successfully installed packaging-24.2
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/test/cpu
Collecting torch==2.6.0
  Using cached https://download.pytorch.org/whl/test/cpu/torch-2.6.0%2Bcpu-cp310-cp310-linux_x86_64.whl.metadata (26 kB)
Collecting torchvision==0.21.0
  Using cached https://download.pytorch.org/whl/test/cpu/torchvision-0.21.0%2Bcpu-cp310-cp310-linux_x86_64.whl.metadata (6.1 kB)
Collecting typing-extensions
  Using cached https://download.pytorch.org/whl/test/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting cmake
  Using cached cmake-3.31.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.3 kB)
Requirement already satisfied: pip>=23 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (25.0)
Collecting pyyaml
  Using cached https://download.pytorch.org/whl/test/PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)
Requirement already satisfied: setuptools>=63 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (75.8.0)
Collecting tomli
  Using cached tomli-2.2.1-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: wheel in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (0.45.1)
Collecting zstd
  Downloading zstd-1.5.6.5-cp310-cp310-manylinux_2_34_x86_64.whl.metadata (20 kB)
Collecting timm==1.0.7
  Using cached timm-1.0.7-py3-none-any.whl.metadata (47 kB)
Collecting torchaudio==2.6.0
  Using cached https://download.pytorch.org/whl/test/cpu/torchaudio-2.6.0%2Bcpu-cp310-cp310-linux_x86_64.whl.metadata (6.6 kB)
Collecting torchsr==1.0.4
  Using cached torchsr-1.0.4-py3-none-any.whl.metadata (12 kB)
Collecting transformers==4.47.1
  Using cached transformers-4.47.1-py3-none-any.whl.metadata (44 kB)
Collecting filelock (from torch==2.6.0)
  Using cached filelock-3.17.0-py3-none-any.whl.metadata (2.9 kB)
Collecting networkx (from torch==2.6.0)
  Using cached networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch==2.6.0)
  Using cached jinja2-3.1.5-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.6.0)
  Using cached fsspec-2025.2.0-py3-none-any.whl.metadata (11 kB)
Collecting sympy==1.13.1 (from torch==2.6.0)
  Using cached https://download.pytorch.org/whl/test/sympy-1.13.1-py3-none-any.whl (6.2 MB)
Collecting numpy (from torchvision==0.21.0)
  Using cached numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.21.0)
  Using cached pillow-11.1.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (9.1 kB)
Collecting huggingface_hub (from timm==1.0.7)
  Using cached huggingface_hub-0.29.1-py3-none-any.whl.metadata (13 kB)
Collecting safetensors (from timm==1.0.7)
  Using cached safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Requirement already satisfied: packaging>=20.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from transformers==4.47.1) (24.2)
Collecting regex!=2019.12.17 (from transformers==4.47.1)
  Using cached regex-2024.11.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (40 kB)
Collecting requests (from transformers==4.47.1)
  Using cached https://download.pytorch.org/whl/test/requests-2.32.3-py3-none-any.whl (64 kB)
Collecting tokenizers<0.22,>=0.21 (from transformers==4.47.1)
  Using cached tokenizers-0.21.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.7 kB)
Collecting tqdm>=4.27 (from transformers==4.47.1)
  Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch==2.6.0)
  Using cached https://download.pytorch.org/whl/test/mpmath-1.3.0-py3-none-any.whl (536 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.6.0)
  Using cached MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->transformers==4.47.1)
  Using cached charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->transformers==4.47.1)
  Using cached https://download.pytorch.org/whl/test/idna-3.10-py3-none-any.whl (70 kB)
Collecting urllib3<3,>=1.21.1 (from requests->transformers==4.47.1)
  Using cached urllib3-2.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->transformers==4.47.1)
  Using cached certifi-2025.1.31-py3-none-any.whl.metadata (2.5 kB)
Using cached https://download.pytorch.org/whl/test/cpu/torch-2.6.0%2Bcpu-cp310-cp310-linux_x86_64.whl (178.6 MB)
Using cached https://download.pytorch.org/whl/test/cpu/torchvision-0.21.0%2Bcpu-cp310-cp310-linux_x86_64.whl (1.8 MB)
Using cached timm-1.0.7-py3-none-any.whl (2.3 MB)
Using cached https://download.pytorch.org/whl/test/cpu/torchaudio-2.6.0%2Bcpu-cp310-cp310-linux_x86_64.whl (1.7 MB)
Using cached torchsr-1.0.4-py3-none-any.whl (31 kB)
Using cached transformers-4.47.1-py3-none-any.whl (10.1 MB)
Using cached cmake-3.31.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27.8 MB)
Using cached tomli-2.2.1-py3-none-any.whl (14 kB)
Downloading zstd-1.5.6.5-cp310-cp310-manylinux_2_34_x86_64.whl (1.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 10.1 MB/s eta 0:00:00
Using cached huggingface_hub-0.29.1-py3-none-any.whl (468 kB)
Using cached fsspec-2025.2.0-py3-none-any.whl (184 kB)
Using cached numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
Using cached pillow-11.1.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.5 MB)
Using cached regex-2024.11.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (781 kB)
Using cached safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (471 kB)
Using cached tokenizers-0.21.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.0 MB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Using cached filelock-3.17.0-py3-none-any.whl (16 kB)
Using cached jinja2-3.1.5-py3-none-any.whl (134 kB)
Using cached networkx-3.4.2-py3-none-any.whl (1.7 MB)
Using cached certifi-2025.1.31-py3-none-any.whl (166 kB)
Using cached charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)
Using cached MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)
Using cached urllib3-2.3.0-py3-none-any.whl (128 kB)
Installing collected packages: zstd, mpmath, urllib3, typing-extensions, tqdm, tomli, sympy, safetensors, regex, pyyaml, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, cmake, charset-normalizer, certifi, requests, jinja2, torch, huggingface_hub, torchvision, torchaudio, tokenizers, transformers, torchsr, timm
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
anyio 4.3.0 requires exceptiongroup>=1.0.2; python_version < "3.11", which is not installed.
fastapi 0.110.0 requires pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4, which is not installed.
patsy 0.5.6 requires six, which is not installed.
pytorch-forecasting 1.0.0 requires matplotlib, which is not installed.
pytorch-forecasting 1.0.0 requires pandas<=3.0.0,>=1.3.0, which is not installed.
pytorch-forecasting 1.0.0 requires scikit-learn<2.0,>=1.2, which is not installed.
pytorch-forecasting 1.0.0 requires scipy<2.0,>=1.8, which is not installed.
statsmodels 0.14.1 requires pandas!=2.1.0,>=1.0, which is not installed.
statsmodels 0.14.1 requires scipy!=1.9.2,>=1.4, which is not installed.
lightning 2.2.1 requires fsspec[http]<2025.0,>=2022.5.0, but you have fsspec 2025.2.0 which is incompatible.
statsmodels 0.14.1 requires numpy<2,>=1.18, but you have numpy 2.2.3 which is incompatible.
Successfully installed MarkupSafe-3.0.2 certifi-2025.1.31 charset-normalizer-3.4.1 cmake-3.31.6 filelock-3.17.0 fsspec-2025.2.0 huggingface_hub-0.29.1 idna-3.10 jinja2-3.1.5 mpmath-1.3.0 networkx-3.4.2 numpy-2.2.3 pillow-11.1.0 pyyaml-6.0.2 regex-2024.11.6 requests-2.32.3 safetensors-0.5.3 sympy-1.13.1 timm-1.0.7 tokenizers-0.21.0 tomli-2.2.1 torch-2.6.0+cpu torchaudio-2.6.0+cpu torchsr-1.0.4 torchvision-0.21.0+cpu tqdm-4.67.1 transformers-4.47.1 typing-extensions-4.12.2 urllib3-2.3.0 zstd-1.5.6.5
Processing ./third-party/ao
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: torchao
  Building wheel for torchao (setup.py) ... done
  Created wheel for torchao: filename=torchao-0.8.0+gitebc43034-py3-none-any.whl size=614723 sha256=73be0d671e76863cfd57123f23aab38f998c34c7c9c59fa4608292e9a066c909
  Stored in directory: /tmp/pip-ephem-wheel-cache-_rer98v3/wheels/7c/9c/33/a77635d9f7cb8afae55254e213b1b4bbfba86e5cd8ed6d6acf
Successfully built torchao
Installing collected packages: torchao
Successfully installed torchao-0.8.0+gitebc43034
Using pip 25.0 from /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/pip (python 3.10)
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/test/cpu
Processing /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch
  Running command Preparing metadata (pyproject.toml)
  running dist_info
  creating /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info
  writing /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/PKG-INFO
  writing dependency_links to /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/dependency_links.txt
  writing entry points to /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/entry_points.txt
  writing requirements to /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/requires.txt
  writing top-level names to /tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/top_level.txt
  writing manifest file '/tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/SOURCES.txt'
  reading manifest file '/tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/SOURCES.txt'
  adding license file 'LICENSE'
  writing manifest file '/tmp/pip-modern-metadata-mw_zma3y/executorch.egg-info/SOURCES.txt'
  creating '/tmp/pip-modern-metadata-mw_zma3y/executorch-0.5.0a0+1bc0699.dist-info'
  Preparing metadata (pyproject.toml) ... done
Collecting expecttest (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for expecttest from https://files.pythonhosted.org/packages/27/fb/deeefea1ea549273817ca7bed3db2f39cc238a75a745a20e3651619f7335/expecttest-0.3.0-py3-none-any.whl.metadata
  Using cached expecttest-0.3.0-py3-none-any.whl.metadata (3.8 kB)
Collecting flatbuffers (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for flatbuffers from https://files.pythonhosted.org/packages/b8/25/155f9f080d5e4bc0082edfda032ea2bc2b8fab3f4d25d46c1e9dd22a1a89/flatbuffers-25.2.10-py2.py3-none-any.whl.metadata
  Using cached flatbuffers-25.2.10-py2.py3-none-any.whl.metadata (875 bytes)
Collecting hypothesis (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for hypothesis from https://files.pythonhosted.org/packages/3e/15/234573ed76ab2b065c562c72b25ade28ed9d46d0efd347a8599a384521a1/hypothesis-6.127.5-py3-none-any.whl.metadata
  Using cached hypothesis-6.127.5-py3-none-any.whl.metadata (4.4 kB)
Requirement already satisfied: mpmath==1.3.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (1.3.0)
Collecting numpy==2.0.0 (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for numpy==2.0.0 from https://files.pythonhosted.org/packages/d6/a8/6a2419c40c7b6f7cb4ef52c532c88e55490c4fa92885964757d507adddce/numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
Requirement already satisfied: packaging in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (24.2)
Collecting pandas==2.2.2 (from executorch==0.5.0a0+1bc0699)
  Using cached https://download.pytorch.org/whl/test/pandas-2.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.0 MB)
Collecting parameterized (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for parameterized from https://files.pythonhosted.org/packages/00/2f/804f58f0b856ab3bf21617cccf5b39206e6c4c94c2cd227bde125ea6105f/parameterized-0.9.0-py2.py3-none-any.whl.metadata
  Using cached parameterized-0.9.0-py2.py3-none-any.whl.metadata (18 kB)
Collecting pytest (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for pytest from https://files.pythonhosted.org/packages/30/3d/64ad57c803f1fa1e963a7946b6e0fea4a70df53c1a7fed304586539c2bac/pytest-8.3.5-py3-none-any.whl.metadata
  Using cached pytest-8.3.5-py3-none-any.whl.metadata (7.6 kB)
Collecting pytest-xdist (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for pytest-xdist from https://files.pythonhosted.org/packages/6d/82/1d96bf03ee4c0fdc3c0cbe61470070e659ca78dc0086fb88b66c185e2449/pytest_xdist-3.6.1-py3-none-any.whl.metadata
  Using cached pytest_xdist-3.6.1-py3-none-any.whl.metadata (4.3 kB)
Requirement already satisfied: pyyaml in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (6.0.2)
Collecting ruamel.yaml (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for ruamel.yaml from https://files.pythonhosted.org/packages/c2/36/dfc1ebc0081e6d39924a2cc53654497f967a084a436bb64402dfce4254d9/ruamel.yaml-0.18.10-py3-none-any.whl.metadata
  Using cached ruamel.yaml-0.18.10-py3-none-any.whl.metadata (23 kB)
Requirement already satisfied: sympy in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (1.13.1)
Collecting tabulate (from executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for tabulate from https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl.metadata
  Using cached tabulate-0.9.0-py3-none-any.whl.metadata (34 kB)
Requirement already satisfied: torch==2.6.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (2.6.0+cpu)
Requirement already satisfied: torchaudio==2.6.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (2.6.0+cpu)
Requirement already satisfied: torchvision==0.21.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (0.21.0+cpu)
Requirement already satisfied: typing-extensions in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from executorch==0.5.0a0+1bc0699) (4.12.2)
Collecting python-dateutil>=2.8.2 (from pandas==2.2.2->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for python-dateutil>=2.8.2 from https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas==2.2.2->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for pytz>=2020.1 from https://files.pythonhosted.org/packages/eb/38/ac33370d784287baa1c3d538978b5e2ea064d4c1b93ffbd12826c190dd10/pytz-2025.1-py2.py3-none-any.whl.metadata
  Using cached pytz-2025.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas==2.2.2->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for tzdata>=2022.7 from https://files.pythonhosted.org/packages/0f/dd/84f10e23edd882c6f968c21c2434fe67bd4a528967067515feca9e611e5e/tzdata-2025.1-py2.py3-none-any.whl.metadata
  Using cached tzdata-2025.1-py2.py3-none-any.whl.metadata (1.4 kB)
Requirement already satisfied: filelock in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from torch==2.6.0->executorch==0.5.0a0+1bc0699) (3.17.0)
Requirement already satisfied: networkx in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from torch==2.6.0->executorch==0.5.0a0+1bc0699) (3.4.2)
Requirement already satisfied: jinja2 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from torch==2.6.0->executorch==0.5.0a0+1bc0699) (3.1.5)
Requirement already satisfied: fsspec in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from torch==2.6.0->executorch==0.5.0a0+1bc0699) (2025.2.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from torchvision==0.21.0->executorch==0.5.0a0+1bc0699) (11.1.0)
Requirement already satisfied: attrs>=22.2.0 in /home/adonnini1/.local/lib/python3.10/site-packages (from hypothesis->executorch==0.5.0a0+1bc0699) (23.2.0)
Collecting exceptiongroup>=1.0.0 (from hypothesis->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for exceptiongroup>=1.0.0 from https://files.pythonhosted.org/packages/02/cc/b7e31358aac6ed1ef2bb790a9746ac2c69bcb3c8588b41616914eb106eaf/exceptiongroup-1.2.2-py3-none-any.whl.metadata
  Using cached exceptiongroup-1.2.2-py3-none-any.whl.metadata (6.6 kB)
Collecting sortedcontainers<3.0.0,>=2.1.0 (from hypothesis->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for sortedcontainers<3.0.0,>=2.1.0 from https://files.pythonhosted.org/packages/32/46/9cb0e58b2deb7f82b84065f37f3bffeb12413f947f9388e4cac22c4621ce/sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata
  Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB)
Collecting iniconfig (from pytest->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for iniconfig from https://files.pythonhosted.org/packages/ef/a6/62565a6e1cf69e10f5727360368e451d4b7f58beeac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl.metadata
  Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB)
Collecting pluggy<2,>=1.5 (from pytest->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for pluggy<2,>=1.5 from https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl.metadata
  Using cached pluggy-1.5.0-py3-none-any.whl.metadata (4.8 kB)
Requirement already satisfied: tomli>=1 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from pytest->executorch==0.5.0a0+1bc0699) (2.2.1)
Collecting execnet>=2.1 (from pytest-xdist->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for execnet>=2.1 from https://files.pythonhosted.org/packages/43/09/2aea36ff60d16dd8879bdb2f5b3ee0ba8d08cbbdcdfe870e695ce3784385/execnet-2.1.1-py3-none-any.whl.metadata
  Using cached execnet-2.1.1-py3-none-any.whl.metadata (2.9 kB)
Requirement already satisfied: ruamel.yaml.clib>=0.2.7 in /home/adonnini1/.local/lib/python3.10/site-packages (from ruamel.yaml->executorch==0.5.0a0+1bc0699) (0.2.8)
Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas==2.2.2->executorch==0.5.0a0+1bc0699)
  Obtaining dependency information for six>=1.5 from https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl.metadata
  Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Requirement already satisfied: MarkupSafe>=2.0 in /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages (from jinja2->torch==2.6.0->executorch==0.5.0a0+1bc0699) (3.0.2)
Using cached numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.3 MB)
Using cached expecttest-0.3.0-py3-none-any.whl (8.2 kB)
Using cached flatbuffers-25.2.10-py2.py3-none-any.whl (30 kB)
Using cached hypothesis-6.127.5-py3-none-any.whl (483 kB)
Using cached parameterized-0.9.0-py2.py3-none-any.whl (20 kB)
Using cached pytest-8.3.5-py3-none-any.whl (343 kB)
Using cached pytest_xdist-3.6.1-py3-none-any.whl (46 kB)
Using cached ruamel.yaml-0.18.10-py3-none-any.whl (117 kB)
Using cached tabulate-0.9.0-py3-none-any.whl (35 kB)
Using cached exceptiongroup-1.2.2-py3-none-any.whl (16 kB)
Using cached execnet-2.1.1-py3-none-any.whl (40 kB)
Using cached pluggy-1.5.0-py3-none-any.whl (20 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached pytz-2025.1-py2.py3-none-any.whl (507 kB)
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Using cached tzdata-2025.1-py2.py3-none-any.whl (346 kB)
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Building wheels for collected packages: executorch
  Running command Building wheel for executorch (pyproject.toml)
  running bdist_wheel
  running build
  command options for 'CustomBuild':
    build_base = pip-out
    build_purelib = pip-out/lib
    build_platlib = pip-out/lib.linux-x86_64-cpython-310
    build_lib = pip-out/lib.linux-x86_64-cpython-310
    build_scripts = pip-out/scripts-3.10
    build_temp = pip-out/temp.linux-x86_64-cpython-310
    plat_name = linux-x86_64
    compiler = None
    parallel = 31
    debug = None
    force = None
    executable = /home/adonnini1/anaconda3/envs/executorch/bin/python
  creating /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out
  deleting /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out/CMakeCache.txt
  cmake -S /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch -B /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out -DBUCK2= -DPYTHON_EXECUTABLE=/home/adonnini1/anaconda3/envs/executorch/bin/python -DCMAKE_PREFIX_PATH=/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages -DCMAKE_BUILD_TYPE=Release -DEXECUTORCH_ENABLE_LOGGING=ON -DEXECUTORCH_LOG_LEVEL=Info -DCMAKE_OSX_DEPLOYMENT_TARGET=10.15 -DEXECUTORCH_SEPARATE_FLATCC_HOST_PROJECT=OFF -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON -DEXECUTORCH_BUILD_KERNELS_CUSTOM_AOT=ON -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON -DEXECUTORCH_BUILD_KERNELS_QUANTIZED_AOT=ON
  -- The C compiler identification is GNU 12.2.0
  -- The CXX compiler identification is GNU 12.2.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  CMake Deprecation Warning at backends/xnnpack/third-party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
    Compatibility with CMake < 3.10 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
    to tell CMake that the project requires at least <min> but has been updated
    to work with policies introduced by <max> or earlier.


  CMake Deprecation Warning at backends/xnnpack/third-party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
    Compatibility with CMake < 3.10 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
    to tell CMake that the project requires at least <min> but has been updated
    to work with policies introduced by <max> or earlier.


  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  CMake Deprecation Warning at backends/xnnpack/third-party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
    Compatibility with CMake < 3.10 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
    to tell CMake that the project requires at least <min> but has been updated
    to work with policies introduced by <max> or earlier.


  -- Using python executable '/home/adonnini1/anaconda3/envs/executorch/bin/python'
  -- Resolved buck2 as /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284.
  -- Killing buck2 daemon
  '/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284 killall'
  -- executorch: Generating source lists
  -- executorch: Generating source file list /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out/executorch_srcs.cmake
  Error while generating /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/pip-out/temp.linux-x86_64-cpython-310/cmake-out/executorch_srcs.cmake. Exit code: 1
  Output:

  Error:
  Traceback (most recent call last):
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/buck_util.py", line 26, in run
      cp: subprocess.CompletedProcess = subprocess.run(
    File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck2-bin/buck2-3bbde7daa94987db468d021ad625bc93dc62ba7fcb16945cb09b64aab077f284', 'cquery', "inputs(deps('//runtime/executor:program'))"]' returned non-zero exit status 2.

  The above exception was the direct cause of the following exception:

  Traceback (most recent call last):
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 232, in <module>
      main()
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 217, in main
      target_to_srcs[name] = sorted(target.get_sources(graph, runner, buck_args))
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/extract_sources.py", line 121, in get_sources
      sources: set[str] = set(runner.run(["cquery", query] + buck_args))
    File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/build/buck_util.py", line 31, in run
      raise RuntimeError(ex.stderr.decode("utf-8")) from ex
  RuntimeError: Command failed:
  Error validating working directory

  Caused by:
      0: Failed to stat `/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/buck-out/v2`
      1: ENOENT: No such file or directory


  CMake Error at build/Utils.cmake:216 (message):
    executorch: source list generation failed
  Call Stack (most recent call first):
    CMakeLists.txt:387 (extract_sources)


  -- Configuring incomplete, errors occurred!
  error: command '/home/adonnini1/anaconda3/envs/executorch/bin/cmake' failed with exit code 1
  error: subprocess-exited-with-error
  
  × Building wheel for executorch (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /home/adonnini1/anaconda3/envs/executorch/bin/python /home/adonnini1/anaconda3/envs/executorch/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpkjwekhbs
  cwd: /home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch
  Building wheel for executorch (pyproject.toml) ... error
  ERROR: Failed building wheel for executorch
Failed to build executorch
ERROR: Failed to build installable wheels for some pyproject.toml based projects (executorch)
Traceback (most recent call last):
  File "/home/adonnini1/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch/./install_requirements.py", line 198, in <module>
    subprocess.run(
  File "/home/adonnini1/anaconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/adonnini1/anaconda3/envs/executorch/bin/python', '-m', 'pip', 'install', '.', '--no-build-isolation', '-v', '--extra-index-url', 'https://download.pytorch.org/whl/test/cpu']' returned non-zero exit status 1.
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ/executorch$ 
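For readers following the traceback: the `RuntimeError` is raised by executorch's `build/buck_util.py`, which wraps the buck2 invocation in `subprocess.run` and re-raises any failure with the captured stderr — which is why the buck2 message ("Failed to stat .../buck-out/v2 ... ENOENT") surfaces through CMake and pip. A minimal, simplified sketch of that wrapper pattern (illustrative only, not the verbatim executorch source):

```python
import subprocess

def run(cmd: list[str]) -> list[str]:
    """Run a command; on failure, surface its stderr in a RuntimeError.

    Simplified sketch of the pattern visible in the traceback from
    build/buck_util.py (not the actual implementation).
    """
    try:
        cp = subprocess.run(cmd, capture_output=True, check=True)
    except subprocess.CalledProcessError as ex:
        # This is the re-raise seen in the log: the underlying buck2 stderr
        # becomes the RuntimeError message that CMake then reports.
        raise RuntimeError(ex.stderr.decode("utf-8")) from ex
    return cp.stdout.decode("utf-8").splitlines()
```

So the root failure is buck2's working-directory validation, not pip or CMake themselves; those layers only propagate the error upward.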

@adonnini
Author

adonnini commented Mar 4, 2025

@guangy10

I tried another experiment.

I tried to set up Executorch in an empty folder. It worked.

I think this means that something in the folder I was trying to install it in was causing the problem.

Question: Can I copy the executorch folder from the location where the setup worked into the application folder where I was trying (unsuccessfully) to install it, and use it there?

Unless you have questions or need me to do anything else, do you want to close this issue?

Thanks
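For context, the ENOENT in the original log just means buck2 tried to stat `buck-out/v2` and the directory was not there. A throwaway demonstration of that failure mode in a scratch directory (all paths here are examples; nothing touches a real checkout, and whether recreating the directory would satisfy buck2 in the broken checkout is untested here):

```python
import os
import tempfile

# Scratch demo of the ENOENT failure mode from the log.
tmp = tempfile.mkdtemp()
missing = os.path.join(tmp, "buck-out", "v2")
try:
    os.stat(missing)  # buck2's working-directory validation does the equivalent of this stat
except FileNotFoundError as e:
    print("ENOENT:", e.errno == 2)
os.makedirs(missing)  # once the directory exists again, the stat succeeds
print("exists:", os.path.isdir(missing))
```

Setting up in an empty folder works because nothing has deleted or corrupted buck2's `buck-out` state there yet.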

@guangy10 guangy10 self-assigned this Mar 4, 2025
@guangy10 guangy10 removed the need-user-input The issue needs more information from the reporter before moving forward label Mar 4, 2025
@guangy10
Contributor

guangy10 commented Mar 4, 2025


I don't know how you set up your project locally, so I can't comment on that. But in general, if you move the entire executorch folder and redo the setup from scratch, it should work (it is no different from cloning the repo to a new location and setting it up there); however, moving only some modules and relocating them under another project may not work.
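The distinction above — moving the whole checkout versus cherry-picking sub-folders — can be illustrated with scratch directories (no real install involved; all paths here are made up for the demo):

```shell
# Sketch: moving the entire checkout keeps its internal layout intact,
# which is why a full move + re-setup can work while relocating only
# sub-folders breaks relative paths. Scratch dirs only.
set -e
root=$(mktemp -d)
mkdir -p "$root/old/executorch/backends" "$root/new"
mv "$root/old/executorch" "$root/new/"               # move the whole tree
[ -d "$root/new/executorch/backends" ] && echo "layout intact"
```

After a move like this, re-running the setup from inside the moved checkout regenerates any machine- or path-specific state.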

Let us know if there is anything else we can help with.

@guangy10 guangy10 closed this as completed Mar 4, 2025
@github-project-automation github-project-automation bot moved this from To triage to Done in ExecuTorch DevX Mar 4, 2025
@guangy10 guangy10 reopened this Mar 5, 2025
@github-project-automation github-project-automation bot moved this from Done to Backlog in ExecuTorch DevX Mar 5, 2025
@guangy10 guangy10 added the need-user-input The issue needs more information from the reporter before moving forward label Mar 5, 2025

byjlw commented Mar 5, 2025

I'd love to get more details on your exact project structure (before fixing the issue) so that we can solve the problem holistically and ensure other people don't get into this same situation.


guangy10 commented Mar 5, 2025

In particular, can you clarify a few things for us so that we can understand the use case and define the UX improvements that would benefit you and other developers with similar needs:

  1. The location of the executorch checkout you set up initially.
  2. Where are you moving the executorch project to, and are you moving the entire executorch dir or only sub-folders/sub-modules?
    2.1. If sub-folders/sub-modules, can you list what they are?
  3. What did you do after moving executorch? For example, did you create a new conda env in the new location and re-run the installation, or reuse the existing one?
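A few harmless commands can gather most of the details asked for above (this is just a suggested checklist, not something the maintainers requested verbatim):

```shell
# Sketch: collect environment details relevant to the questions above.
if command -v conda >/dev/null; then
    conda env list               # which envs exist: new vs reused
fi
echo "python: $(command -v python)"              # active interpreter
pwd                                              # current project location
ls -d executorch* 2>/dev/null || echo "no executorch checkout here"
```

Pasting this output into the issue makes it easy to see which conda env and which checkout the installer actually ran against.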

@guangy10 guangy10 removed the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Mar 5, 2025

adonnini commented Mar 5, 2025

@guangy10

This won't be a complete answer, but I thought I would start answering your questions before going to bed (I am in CET).

1) I have a model directory inside which I set up executorch. I have done this for two models many times since 2023. When there is a new version of executorch, or I decide to set it up again for some reason, I rename the older version's directory to executorch<date> and then set up executorch again inside the model directory.
The week before last I did this for one model and everything worked.
Last week I did this for a second model, and the problem occurred.
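The rename-by-date workflow described above can be sketched as follows (scratch directory only; `executorch` here is an empty stand-in, not a real checkout, and the date format just mirrors the `executorch081224`-style names in the listing below):

```shell
# Sketch: archive the old checkout by date, then make room for a fresh one.
set -e
work=$(mktemp -d)
cd "$work"
mkdir executorch                     # stand-in for the old checkout
stamp=$(date +%m%d%y)
mv executorch "executorch$stamp"     # e.g. executorch081224
mkdir executorch                     # a fresh clone/setup would go here
ls -d executorch*                    # both directories now present
```

In the real workflow the final step would be cloning the repo into the new `executorch` directory and running its installer.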

2), 3)
Yesterday, I created an empty folder and set up executorch inside it. The installation was successful. Then I copied the entire executorch folder (including everything it contained) to the model directory where the executorch installation had failed. After the copy operation, I ran the model from a terminal window where the conda environment for executorch was already active. Unfortunately, model execution failed with the same error as when I ran it using previous releases of Executorch (#6782)

Below you will find the output of ls -l for the model directory where executorch installation failed. Please note that I added run_python_script.sh and install_executorch.sh following your suggestions to try and resolve this issue.

Please let me know if you have any other questions and/or would like me to do anything else.

Thanks

ls -l OUTPUT

(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ ls -l
total 10928
-rw-r--r--  1 adonnini1 adonnini1    22068 May 19  2024 collect_env.py
-rw-r--r--  1 adonnini1 adonnini1     9390 Aug 12  2024 dataloader.py
drwxrwxr-x 10 adonnini1 adonnini1     4096 Jun 30  2024 datasets
-rw-r--r--  1 adonnini1 adonnini1 10647409 Aug 12  2024 datasets081224.zip
drwxr-xr-x  3 adonnini1 adonnini1     4096 Aug 12  2024 datasets.old
drwxr-xr-x 26 adonnini1 adonnini1     4096 Mar  4 10:20 executorch
drwxr-xr-x 28 adonnini1 adonnini1     4096 Aug 12  2024 executorch081224
drwxr-xr-x 27 adonnini1 adonnini1     4096 Nov 12 07:33 executorch111224
-rw-r--r--  1 adonnini1 adonnini1    24320 Aug 12  2024 fad.png
-rwxrwxrwx  1 adonnini1 adonnini1      412 Mar  2 02:21 install_executorch.sh
-rw-r--r--  1 adonnini1 adonnini1    35149 Aug 12  2024 LICENSE
-rw-r--r--  1 adonnini1 adonnini1     2045 Mar  2 06:13 loadConfigurationAndCreateXandXdict.java
-rw-r--r--  1 adonnini1 adonnini1     1510 Mar  3 10:47 loadConfigurationAndCreateXandXdictV002.java
-rw-r--r--  1 adonnini1 adonnini1    20423 Aug 12  2024 loss.png
-rw-r--r--  1 adonnini1 adonnini1    23702 Aug 12  2024 mad.png
-rw-r--r--  1 adonnini1 adonnini1    18136 Aug 12  2024 model.py
drwxr-xr-x  2 adonnini1 adonnini1     4096 Dec 24 12:37 models
drwxr-xr-x  2 adonnini1 adonnini1     4096 Aug 12  2024 __pycache__
-rw-r--r--  1 adonnini1 adonnini1     2226 Mar  5 09:09 README.md
drwxr-xr-x  2 adonnini1 adonnini1     4096 Aug 12  2024 Results
-rwxrwxrwx  1 adonnini1 adonnini1      744 Mar  3 10:36 run_python_script.sh
-rw-r--r--  1 adonnini1 adonnini1    36359 Mar  5 09:09 train-minimum.py
-rw-r--r--  1 adonnini1 adonnini1    22612 Aug 12  2024 train.py
-rw-r--r--  1 adonnini1 adonnini1   170227 Aug 12  2024 trajectory-prediction.ipynb
-rw-r--r--  1 adonnini1 adonnini1    81849 Aug 12  2024 transformer_architecture.png
-rw-r--r--  1 adonnini1 adonnini1     4175 Aug 12  2024 utils.py
(executorch) adonnini1@actlnxlpt8:~/Development/ContextQSourceCode/NeuralNetworks/adonnini-trajectory-prediction-transformers-masterContextQ$ 
