Here is what I encountered when trying to load a model onto 2 GPUs on an EC2 instance through vLLM. It fails with:

`Can't pickle <class 'botocore.client.S3'>: attribute lookup S3 on botocore.client failed`

The setup followed this guide: https://docs.vllm.ai/en/stable/serving/runai_model_streamer.html

AWS credentials were set through the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`.

Command line used, with the full output:

```
[ec2-user@ip-172-31-36-112 ~]$ vllm serve s3://llama/llama-3.1-8B --load-format runai_streamer --tensor-parallel-size 2 --model-loader-extra-config '{"concurrency":2}'
INFO 01-07 20:42:53 api_server.py:712] vLLM API server version 0.6.6.post1
INFO 01-07 20:42:53 api_server.py:713] args: Namespace(subparser='serve', model_tag='s3://llama/llama-3.1-8B', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='s3://llama/llama-3.1-8B', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='runai_streamer', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config='{"concurrency":2}', ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function serve at 0x7faf3ab7d940>)
INFO 01-07 20:42:53 api_server.py:199] Started engine process with PID 1982121
INFO 01-07 20:44:49 config.py:510] This model supports multiple tasks: {'classify', 'score', 'generate', 'embed', 'reward'}. Defaulting to 'generate'.
INFO 01-07 20:44:50 config.py:510] This model supports multiple tasks: {'embed', 'classify', 'reward', 'score', 'generate'}. Defaulting to 'generate'.
INFO 01-07 20:44:51 config.py:1310] Defaulting to use mp for distributed inference
WARNING 01-07 20:44:51 arg_utils.py:1103] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 01-07 20:44:51 config.py:1458] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 01-07 20:44:51 config.py:1310] Defaulting to use mp for distributed inference
WARNING 01-07 20:44:51 arg_utils.py:1103] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 01-07 20:44:51 config.py:1458] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 01-07 20:44:51 llm_engine.py:234] Initializing an LLM engine (v0.6.6.post1) with config: model='/tmp/tmprwsjgq9m', speculative_config=None, tokenizer='/tmp/tmptbmhhdeo', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.RUNAI_STREAMER, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=s3://llama/llama-3.1-8B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"candidate_compile_sizes":[],"compile_sizes":[],"capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 01-07 20:44:51 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
ERROR 01-07 20:44:51 engine.py:366] Can't pickle <class 'botocore.client.S3'>: attribute lookup S3 on botocore.client failed
ERROR 01-07 20:44:51 engine.py:366] Traceback (most recent call last):
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-07 20:44:51 engine.py:366] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-07 20:44:51 engine.py:366] return cls(ipc_path=ipc_path,
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-07 20:44:51 engine.py:366] self.engine = LLMEngine(*args, **kwargs)
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-07 20:44:51 engine.py:366] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 01-07 20:44:51 engine.py:366] super().__init__(*args, **kwargs)
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-07 20:44:51 engine.py:366] self._init_executor()
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/executor/multiproc_gpu_executor.py", line 58, in _init_executor
ERROR 01-07 20:44:51 engine.py:366] worker = ProcessWorkerWrapper(
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/site-packages/vllm/executor/multiproc_worker_utils.py", line 167, in __init__
ERROR 01-07 20:44:51 engine.py:366] self.process.start()
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/process.py", line 121, in start
ERROR 01-07 20:44:51 engine.py:366] self._popen = self._Popen(self)
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/context.py", line 289, in _Popen
ERROR 01-07 20:44:51 engine.py:366] return Popen(process_obj)
ERROR 01-07 20:44:51 engine.py:366] ^^^^^^^^^^^^^^^^^^
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 32, in __init__
ERROR 01-07 20:44:51 engine.py:366] super().__init__(process_obj)
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__
ERROR 01-07 20:44:51 engine.py:366] self._launch(process_obj)
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 47, in _launch
ERROR 01-07 20:44:51 engine.py:366] reduction.dump(process_obj, fp)
ERROR 01-07 20:44:51 engine.py:366] File "/opt/conda/lib/python3.12/multiprocessing/reduction.py", line 60, in dump
ERROR 01-07 20:44:51 engine.py:366] ForkingPickler(file, protocol).dump(obj)
ERROR 01-07 20:44:51 engine.py:366] _pickle.PicklingError: Can't pickle <class 'botocore.client.S3'>: attribute lookup S3 on botocore.client failed
Process SpawnProcess-1:
Traceback (most recent call last):
File "/opt/conda/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/conda/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
raise e
File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
super().__init__(*args, **kwargs)
File "/opt/conda/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 36, in __init__
self._init_executor()
File "/opt/conda/lib/python3.12/site-packages/vllm/executor/multiproc_gpu_executor.py", line 58, in _init_executor
worker = ProcessWorkerWrapper(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/executor/multiproc_worker_utils.py", line 167, in __init__
self.process.start()
File "/opt/conda/lib/python3.12/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/multiprocessing/context.py", line 289, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/opt/conda/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/opt/conda/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/opt/conda/lib/python3.12/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'botocore.client.S3'>: attribute lookup S3 on botocore.client failed
Traceback (most recent call last):
File "/opt/conda/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/scripts.py", line 201, in main
args.dispatch_function(args)
File "/opt/conda/lib/python3.12/site-packages/vllm/scripts.py", line 42, in serve
uvloop.run(run_server(args))
File "/opt/conda/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/opt/conda/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
```
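If I read the traceback correctly, the failure happens while the engine pickles its state for the spawned tensor-parallel worker (`reduction.dump` under `multiprocessing`), and that state appears to hold a live botocore S3 client, presumably created during model resolution for the Run:ai streamer load path. botocore generates client classes at runtime, so instances of them cannot be pickled at all. The same error reproduces outside vLLM with plain boto3 (minimal sketch; the region is just a placeholder):

```python
import pickle

import boto3

# botocore creates the `S3` client class dynamically, so pickle cannot
# look it up as `botocore.client.S3` when serializing the instance.
s3 = boto3.client("s3", region_name="us-east-1")

try:
    pickle.dumps(s3)
except pickle.PicklingError as exc:
    print(exc)
    # Can't pickle <class 'botocore.client.S3'>: attribute lookup S3
    # on botocore.client failed
```

I have not checked whether `--tensor-parallel-size 1` avoids this, but the pickling step only happens when worker processes are spawned for distributed inference.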
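A possible stopgap, at the cost of losing streaming, is to sync the weights to local disk first and serve from the local path, so no S3 client has to cross the multiprocessing boundary (untested sketch; `/opt/models` is just an example location):

```bash
# Fetch the weights once, then serve from local disk.
aws s3 sync s3://llama/llama-3.1-8B /opt/models/llama-3.1-8B
vllm serve /opt/models/llama-3.1-8B --tensor-parallel-size 2
```

Switching `--distributed-executor-backend` to `ray` might also sidestep the spawn-and-pickle path shown in the traceback, but I have not tested that either.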