Your current environment
INFO 06-19 02:57:16 [__init__.py:244] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.4 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.30.2
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.7.0+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.4.0-167-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.6.20
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version : 535.216.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4890.72
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
NUMA node2 CPU(s): 64-95
NUMA node3 CPU(s): 96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.5.2
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-dali-cuda120==1.40.0
[pip3] nvidia-ml-py==12.575.51
[pip3] nvidia-ml-py3==7.352.0
[pip3] nvidia-modelopt==0.15.0
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvimgcodec-cu12==0.3.0.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvidia-pyindex==1.0.9
[pip3] nvidia-smi==0.1.3
[pip3] onnx==1.16.1
[pip3] onnxruntime-gpu==1.17.1
[pip3] onnxsim==0.4.36
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.0
[pip3] pynvml==12.0.0
[pip3] pytorch-lightning==2.2.4
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==4.1.0
[pip3] torch==2.7.0
[pip3] torchaudio==2.7.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchpack==0.3.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0
[pip3] transformers==4.53.0.dev0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.3.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.9.1
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 NIC0 NIC1 NIC2 NIC3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PXB SYS SYS SYS 32-63 1 N/A
NIC0 PXB X SYS SYS SYS
NIC1 SYS SYS X SYS SYS
NIC2 SYS SYS SYS X PIX
NIC3 SYS SYS SYS PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
==============================
Environment Variables
==============================
NVIDIA_VISIBLE_DEVICES=GPU-e77165e1-0491-57a4-d10c-852cbce0cf61
CUBLAS_VERSION=12.6.0.22
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
NCCL_VERSION=2.22.3
NVIDIA_DRIVER_CAPABILITIES=video,compute,utility,graphics
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.6.0.022
PYTORCH_VERSION=2.5.0a0+872d972
PYTORCH_BUILD_NUMBER=0
CUDNN_FRONTEND_VERSION=1.5.2
CUDNN_VERSION=9.3.0.75
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=107063150
CUDA_DRIVER_VERSION=560.35.03
PYTORCH_BUILD_VERSION=2.5.0a0+872d972
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=24.08
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
🐛 Describe the bug
When I use vLLM to run inference with the Qwen/Qwen2.5-Omni-3B model, I hit an initialization error:

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-Omni-3B")
outputs = llm.generate('你是谁', SamplingParams(temperature=0.8, top_p=0.95))  # prompt: "Who are you?"
print(outputs)

The complete log is as follows:
INFO 06-19 02:48:20 [__init__.py:244] Automatically detected platform cuda.
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
INFO 06-19 02:49:07 [config.py:823] This model supports multiple tasks: {'classify', 'reward', 'score', 'generate', 'embed'}. Defaulting to 'generate'.
INFO 06-19 02:49:08 [config.py:2195] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 06-19 02:49:16 [core.py:455] Waiting for init message from front-end.
INFO 06-19 02:49:16 [core.py:70] Initializing a V1 LLM engine (v0.9.1) with config: model='Qwen/Qwen2.5-Omni-3B', speculative_config=None, tokenizer='Qwen/Qwen2.5-Omni-3B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2.5-Omni-3B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
WARNING 06-19 02:49:17 [utils.py:2737] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f531dc0a050>
INFO 06-19 02:49:18 [parallel_state.py:1065] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
ERROR 06-19 02:49:35 [core.py:515] EngineCore failed to start.
ERROR 06-19 02:49:35 [core.py:515] Traceback (most recent call last):
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
ERROR 06-19 02:49:35 [core.py:515] engine_core = EngineCoreProc(*args, **kwargs)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 390, in __init__
ERROR 06-19 02:49:35 [core.py:515] super().__init__(vllm_config, executor_class, log_stats,
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 76, in __init__
ERROR 06-19 02:49:35 [core.py:515] self.model_executor = executor_class(vllm_config)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 53, in __init__
ERROR 06-19 02:49:35 [core.py:515] self._init_executor()
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 47, in _init_executor
ERROR 06-19 02:49:35 [core.py:515] self.collective_rpc("init_device")
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
ERROR 06-19 02:49:35 [core.py:515] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/utils.py", line 2671, in run_method
ERROR 06-19 02:49:35 [core.py:515] return func(*args, **kwargs)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 606, in init_device
ERROR 06-19 02:49:35 [core.py:515] self.worker.init_device() # type: ignore
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 160, in init_device
ERROR 06-19 02:49:35 [core.py:515] self.model_runner: GPUModelRunner = GPUModelRunner(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/worker/gpu_model_runner.py", line 129, in __init__
ERROR 06-19 02:49:35 [core.py:515] encoder_compute_budget, encoder_cache_size = compute_encoder_budget(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/core/encoder_cache_manager.py", line 95, in compute_encoder_budget
ERROR 06-19 02:49:35 [core.py:515] ) = _compute_encoder_budget_multimodal(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/core/encoder_cache_manager.py", line 125, in _compute_encoder_budget_multimodal
ERROR 06-19 02:49:35 [core.py:515] .get_max_tokens_per_item_by_nonzero_modality(model_config)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 153, in get_max_tokens_per_item_by_nonzero_modality
ERROR 06-19 02:49:35 [core.py:515] mm_limits = self.get_mm_limits_per_prompt(model_config)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 206, in get_mm_limits_per_prompt
ERROR 06-19 02:49:35 [core.py:515] processor = self.create_processor(model_config, disable_cache=False)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 281, in create_processor
ERROR 06-19 02:49:35 [core.py:515] return factories.build_processor(ctx, cache=cache)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 88, in build_processor
ERROR 06-19 02:49:35 [core.py:515] return self.processor(info, dummy_inputs_builder, cache=cache)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1131, in __init__
ERROR 06-19 02:49:35 [core.py:515] self.data_parser = self._get_data_parser()
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 238, in _get_data_parser
ERROR 06-19 02:49:35 [core.py:515] feature_extractor = self.info.get_feature_extractor()
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 170, in get_feature_extractor
ERROR 06-19 02:49:35 [core.py:515] hf_processor = self.get_hf_processor(sampling_rate=sampling_rate)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 147, in get_hf_processor
ERROR 06-19 02:49:35 [core.py:515] processor = self.ctx.get_hf_processor(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/inputs/registry.py", line 131, in get_hf_processor
ERROR 06-19 02:49:35 [core.py:515] return super().get_hf_processor(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/inputs/registry.py", line 94, in get_hf_processor
ERROR 06-19 02:49:35 [core.py:515] return cached_processor_from_config(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/transformers_utils/processor.py", line 110, in cached_processor_from_config
ERROR 06-19 02:49:35 [core.py:515] return cached_get_processor(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/transformers_utils/processor.py", line 72, in get_processor
ERROR 06-19 02:49:35 [core.py:515] processor = processor_factory.from_pretrained(
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1213, in from_pretrained
ERROR 06-19 02:49:35 [core.py:515] return cls.from_args_and_dict(args, processor_dict, **kwargs)
ERROR 06-19 02:49:35 [core.py:515] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1014, in from_args_and_dict
ERROR 06-19 02:49:35 [core.py:515] processor = cls(*args, **valid_kwargs)
ERROR 06-19 02:49:35 [core.py:515] TypeError: Qwen2_5OmniProcessor.__init__() got multiple values for argument 'image_processor'
Process EngineCore_0:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 519, in run_engine_core
raise e
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
engine_core = EngineCoreProc(*args, **kwargs)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 390, in __init__
super().__init__(vllm_config, executor_class, log_stats,
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 76, in __init__
self.model_executor = executor_class(vllm_config)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 53, in __init__
self._init_executor()
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 47, in _init_executor
self.collective_rpc("init_device")
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/utils.py", line 2671, in run_method
return func(*args, **kwargs)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 606, in init_device
self.worker.init_device() # type: ignore
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 160, in init_device
self.model_runner: GPUModelRunner = GPUModelRunner(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/worker/gpu_model_runner.py", line 129, in __init__
encoder_compute_budget, encoder_cache_size = compute_encoder_budget(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/core/encoder_cache_manager.py", line 95, in compute_encoder_budget
) = _compute_encoder_budget_multimodal(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/core/encoder_cache_manager.py", line 125, in _compute_encoder_budget_multimodal
.get_max_tokens_per_item_by_nonzero_modality(model_config)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 153, in get_max_tokens_per_item_by_nonzero_modality
mm_limits = self.get_mm_limits_per_prompt(model_config)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 206, in get_mm_limits_per_prompt
processor = self.create_processor(model_config, disable_cache=False)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 281, in create_processor
return factories.build_processor(ctx, cache=cache)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 88, in build_processor
return self.processor(info, dummy_inputs_builder, cache=cache)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1131, in __init__
self.data_parser = self._get_data_parser()
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 238, in _get_data_parser
feature_extractor = self.info.get_feature_extractor()
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 170, in get_feature_extractor
hf_processor = self.get_hf_processor(sampling_rate=sampling_rate)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 147, in get_hf_processor
processor = self.ctx.get_hf_processor(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/inputs/registry.py", line 131, in get_hf_processor
return super().get_hf_processor(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/inputs/registry.py", line 94, in get_hf_processor
return cached_processor_from_config(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/transformers_utils/processor.py", line 110, in cached_processor_from_config
return cached_get_processor(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/transformers_utils/processor.py", line 72, in get_processor
processor = processor_factory.from_pretrained(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1213, in from_pretrained
return cls.from_args_and_dict(args, processor_dict, **kwargs)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1014, in from_args_and_dict
processor = cls(*args, **valid_kwargs)
TypeError: Qwen2_5OmniProcessor.__init__() got multiple values for argument 'image_processor'
Traceback (most recent call last):
File "/dc-hl/jun.zhou10/swift/eval/infer_vllm.py", line 65, in <module>
llm = LLM(model="Qwen/Qwen2.5-Omni-3B")
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 243, in __init__
self.llm_engine = LLMEngine.from_engine_args(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 501, in from_engine_args
return engine_cls.from_vllm_config(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/llm_engine.py", line 124, in from_vllm_config
return cls(vllm_config=vllm_config,
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/llm_engine.py", line 101, in __init__
self.engine_core = EngineCoreClient.make_client(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 75, in make_client
return SyncMPClient(vllm_config, executor_class, log_stats)
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 558, in __init__
super().__init__(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 422, in __init__
self._init_engines_direct(vllm_config, local_only,
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 491, in _init_engines_direct
self._wait_for_engine_startup(handshake_socket, input_address,
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/engine/core_client.py", line 511, in _wait_for_engine_startup
wait_for_engine_startup(
File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/v1/utils.py", line 494, in wait_for_engine_startup
raise RuntimeError("Engine core initialization failed. "
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
The transformers commit I used is eac84689.
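
For reference, the final TypeError arises when the same parameter is filled both positionally and by keyword in a single call. A minimal, self-contained sketch of that failure mode (the class and values below are hypothetical stand-ins, not the actual transformers code):

# Sketch of "got multiple values for argument 'image_processor'".
# OmniProcessorSketch and the string values are hypothetical.
class OmniProcessorSketch:
    def __init__(self, image_processor=None, feature_extractor=None, tokenizer=None):
        self.image_processor = image_processor

args = ("image_processor_obj",)  # positional arg already fills image_processor
valid_kwargs = {"image_processor": "image_processor_obj"}  # keyword fills it again

# Mirrors `processor = cls(*args, **valid_kwargs)` in processing_utils.py:
OmniProcessorSketch(*args, **valid_kwargs)
# -> TypeError: OmniProcessorSketch.__init__() got multiple values for argument 'image_processor'

This suggests a mismatch between the processor signature vLLM 0.9.1 expects and the one in the transformers dev build in use (4.53.0.dev0); pinning transformers to a stable release tested against vLLM 0.9.1 may avoid the collision, though that is an assumption rather than a confirmed fix.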