Closed as not planned
Labels: bug (Something isn't working), stale (Over 90 days of inactivity)
Description
Your current environment
The output of `python collect_env.py`:

```text
vllm==0.8.5.post1
```
🐛 Describe the bug
I'm running into a problem when deploying Mistral 3.1 with LoRA adapters. In particular, I'm getting the following message:
```text
| Mistral3.1-24B | 7 | UNAVAILABLE: Internal: AssertionError: PixtralForConditionalGeneration does not support LoRA yet.
```
The LoRA adapter was generated using the specific version of the transformers library outlined here:
https://github.com/huggingface/transformers/releases/tag/v4.49.0-Mistral-3
Based on the suggestion there, `AutoModelForImageTextToText` was used.
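For context, a minimal sketch of how such an adapter is typically produced with PEFT, assuming transformers v4.49.0-Mistral-3 and peft are installed; the model path, target modules, and LoRA hyperparameters below are illustrative, not taken from this issue:

```python
# Minimal sketch of producing a LoRA adapter for Mistral 3.1 with PEFT.
# Assumption: transformers v4.49.0-Mistral-3 and peft; paths and
# hyperparameters are illustrative.
from transformers import AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model

model = AutoModelForImageTextToText.from_pretrained(
    "mistralai/Mistral-Small-3.1-24B-Instruct-2503",
)

lora_config = LoraConfig(
    r=64,  # matches "max_lora_rank": 64 in the vLLM config below
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# ... fine-tune as usual, then save only the adapter weights:
model.save_pretrained("./mistral3.1-lora-adapter")
```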
Below is the vLLM config used when deploying Mistral 3.1 with LoRA:
```json
{
  "model": "/mnt/static_files/weights/Mistral-Small-3.1-24B-Instruct-2503",
  "disable_log_requests": "true",
  "gpu_memory_utilization": 0.8,
  "max_model_len": 12000,
  "max_num_batched_tokens": 12000,
  "tensor_parallel_size": 1,
  "enable_prefix_caching": true,
  "enable_chunked_prefill": true,
  "enable_lora": true,
  "max_lora_rank": 64,
  "max_num_seqs": 4,
  "tokenizer_mode": "mistral",
  "config_format": "mistral",
  "load_format": "mistral",
  "guided_decoding_backend": "xgrammar"
}
```
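For reference, the same settings expressed through vLLM's offline Python API should reproduce the failure. This is a sketch: the adapter path is hypothetical, and server-only options (`disable_log_requests`, `guided_decoding_backend`) are omitted:

```python
# Sketch: the server config above expressed via vLLM's Python API.
# Assumption: on vllm==0.8.5.post1 this hits the same
# "PixtralForConditionalGeneration does not support LoRA yet" assertion
# during engine initialization, since enable_lora=True triggers the
# supports-LoRA check on the loaded model class.
from vllm import LLM
from vllm.lora.request import LoRARequest

llm = LLM(
    model="/mnt/static_files/weights/Mistral-Small-3.1-24B-Instruct-2503",
    gpu_memory_utilization=0.8,
    max_model_len=12000,
    max_num_batched_tokens=12000,
    tensor_parallel_size=1,
    enable_prefix_caching=True,
    enable_chunked_prefill=True,
    enable_lora=True,
    max_lora_rank=64,
    max_num_seqs=4,
    tokenizer_mode="mistral",
    config_format="mistral",
    load_format="mistral",
)

outputs = llm.generate(
    "Hello!",
    lora_request=LoRARequest("adapter", 1, "/path/to/lora-adapter"),  # hypothetical path
)
```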
Any insights on how to resolve this problem are appreciated! Thank you.
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.