
Fail to run voxtral_runner on Metal backend #15649

@seyeong-han

Description

🐛 Describe the bug

Running voxtral_runner with preprocessor

I was able to export the Voxtral model to the Metal backend after resolving some version-incompatibility issues, but running voxtral_runner then fails with the error below.

./cmake-out/examples/models/voxtral/voxtral_runner \
  --model_path voxtral/model.pte \
  --tokenizer_path voxtral/tekken.json \
  --prompt "What can you tell me about this audio?" \
  --audio_path voxtral/sample_audio.wav \
  --processor_path voxtral/voxtral_preprocessor.pte

I tokenizers:regex.cpp:27] Registering override fallback regex
I tokenizers:tekken.cpp:92] Loading Tekken tokenizer from: voxtral/tekken.json
I tokenizers:tekken.cpp:125] Tekken version: v7, vocab_size: 131072, special_tokens: 1000
I tokenizers:tekken.cpp:131] Loading special tokens from JSON
I tokenizers:tekken.cpp:307] Initialized 1000 special tokens (1000 defined, 0 placeholders)
I tokenizers:tekken.cpp:148] Loading 130072 vocabulary tokens
I tokenizers:tekken.cpp:247] Processing 130072 vocabulary entries (limit: 130072)
I tokenizers:tekken.cpp:280] Built vocabulary with 130072 tokens
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1762463354.663646 41470182 re2.cc:237] Error parsing '([^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]*[\p{Ll}\p{Lm}\p{Lo}\p{M}]+|[^\r\n\p{L}\p{N}]?[\p{...': invalid perl operator: (?!
I tokenizers:re2_regex.cpp:27] Re2 failed to compile regex: ([^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]*[\p{Ll}\p{Lm}\p{Lo}\p{M}]+|[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]+[\p{Ll}\p{Lm}\p{Lo}\p{M}]*|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n/]*|\s*[\r\n]+|\s+(?!\S)|\s+), error: inv$
I tokenizers:regex_lookahead.cpp:27] Creating PCRE2 regex
I tokenizers:tekken.cpp:206] Tekken tokenizer loaded successfully. Vocab size: 131072, BOS: 1, EOS: 2
Error: aoti_torch__reinterpret_tensor( self, ndim, sizes_ptr, strides_ptr, storage_offset, &result) API call failed at /Users/younghan/miniconda3/envs/executorch/lib/python3.12/site-packages/torch/include/torch/csrc/inductor/aoti_runtime/utils.h, line 382
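(A note on the RE2 messages in the log: they appear to be benign. RE2 deliberately does not support lookahead assertions such as the `(?!\S)` in the tokenizer's pretokenization pattern, which is why it reports "invalid perl operator: (?!" and the tokenizer falls back to PCRE2; the load still completes successfully. A minimal Python sketch of just the lookahead fragment of that pattern, to show the construct itself is valid in engines that support lookahead:)

```python
import re

# The fragment RE2 rejects: a run of whitespace NOT followed by a
# non-space character (i.e. trailing whitespace), otherwise any run
# of whitespace. Python's re engine supports the (?!...) lookahead.
pattern = re.compile(r"\s+(?!\S)|\s+")

# Matches the inter-word space and the trailing run of spaces.
print(pattern.findall("a b  "))  # → [' ', '  ']
```

So the actual failure here is the `aoti_torch__reinterpret_tensor` error, not the regex warnings.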

Reproduce

1. Install and Build for Metal

# Install ExecuTorch with Metal support
CMAKE_ARGS="-DEXECUTORCH_BUILD_METAL=ON" ./install_executorch.sh

# Build the multimodal runner with Metal
cmake --preset llm \
      -DEXECUTORCH_BUILD_METAL=ON \
      -DCMAKE_INSTALL_PREFIX=cmake-out \
      -DCMAKE_BUILD_TYPE=Release \
      -Bcmake-out -S.
cmake --build cmake-out -j16 --target install --config Release

cmake -DEXECUTORCH_BUILD_METAL=ON \
      -DCMAKE_BUILD_TYPE=Release \
      -Sexamples/models/voxtral \
      -Bcmake-out/examples/models/voxtral/
cmake --build cmake-out/examples/models/voxtral --target voxtral_runner --config Release

2. Export Metal backend model.pte

Exported the Metal-backend model.pte using compatible package versions (as listed in the Versions section below).

3. Convert mp3 to wav

Downloaded a sample MP3 audio file and converted it to WAV:

afconvert -f WAVE -d LEI16 voxtral/sample_audio.mp3 voxtral/sample_audio.wav

4. Run voxtral_runner

./cmake-out/examples/models/voxtral/voxtral_runner \
  --model_path voxtral/model.pte \
  --tokenizer_path voxtral/tekken.json \
  --prompt "What can you tell me about this audio?" \
  --audio_path voxtral/sample_audio.wav \
  --processor_path voxtral/voxtral_preprocessor.pte

(This produces the same tokenizer log and fails with the same aoti_torch__reinterpret_tensor error shown in the description above.)

Versions

PyTorch version: 2.10.0.dev20251015
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.7.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.3.19.1)
CMake version: version 3.31.6
Libc version: N/A

Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:07:49) [Clang 20.1.8 ] (64-bit runtime)
Python platform: macOS-15.7.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M3 Pro

Versions of relevant libraries:
[pip3] executorch==1.1.0a0+4a75896
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.3.4
[pip3] optimum-executorch==0.2.0.dev0
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.10.0.dev20251015
[pip3] torchao==0.14.0+git01849b2b1
[pip3] torchaudio==2.8.0.dev20251015
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.6.1
[pip3] torchvision==0.25.0.dev20251015
[conda] executorch 1.1.0a0+4a75896 pypi_0 pypi
[conda] numpy 2.3.4 pypi_0 pypi
[conda] optimum-executorch 0.2.0.dev0 pypi_0 pypi
[conda] pytorch-tokenizers 1.0.1 pypi_0 pypi
[conda] torch 2.10.0.dev20251015 pypi_0 pypi
[conda] torchao 0.14.0+git01849b2b1 pypi_0 pypi
[conda] torchaudio 2.8.0.dev20251015 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.6.1 pypi_0 pypi
[conda] torchvision 0.25.0.dev20251015 pypi_0 pypi
