Please enhance the model information to give local LLM users a better overview. The table below illustrates the kind of values that would help (I don't think the numbers themselves are accurate):
| Model name | Type | Context length | VRAM (FP16) | VRAM (FP32) | Download |
|---|---|---|---|---|---|
| Qwen3-Coder-Next | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-Base | base | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-480B-A35B-Instruct | instruct | 256k | ~70 GB | ~140 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-30B-A3B-Instruct | instruct | 256k | ~6 GB | ~12 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-FP8 | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-GGUF | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-480B-A35B-Instruct-FP8 | instruct | 256k | ~70 GB | ~140 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-30B-A3B-Instruct-FP8 | instruct | 256k | ~6 GB | ~12 GB | 🤗 Hugging Face • 🤖 ModelScope |
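For anyone wanting to sanity-check figures like these, a rough weights-only estimate is parameter count × bytes per parameter. This is a minimal sketch of that arithmetic; it ignores KV cache, activations, and framework overhead, and the 30B example size is taken from the model name above, not an official memory figure:

```python
# Bytes per parameter at common precisions (q4 is an approximation
# for 4-bit quantized formats such as GGUF Q4 variants).
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1, "q4": 0.5}

def weight_vram_gib(params_billions: float, dtype: str) -> float:
    """Approximate GiB needed just to hold the weights at a given precision."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 2**30

# Example: a 30B-parameter model at each precision.
for dtype in ("fp32", "fp16", "fp8", "q4"):
    print(f"30B @ {dtype}: ~{weight_vram_gib(30, dtype):.0f} GiB")
```

Actual usable numbers depend on context length (KV cache grows with it) and the inference engine, so listed values should probably be labeled "weights only" or measured empirically.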