Readme Enhancement for requirements #581

@rufreakde

Description

Please enhance the model information to provide a better overview for local LLM users.

The example below illustrates some values (I don't think they are accurate):

| Model name | Type | Context length | VRAM (FP16) | VRAM (FP32) | Download |
|---|---|---|---|---|---|
| Qwen3-Coder-Next | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-Base | base | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-480B-A35B-Instruct | instruct | 256k | ~70 GB | ~140 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-30B-A3B-Instruct | instruct | 256k | ~6 GB | ~12 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-FP8 | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-Next-GGUF | instruct | 256k | ~144 GB | ~288 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-480B-A35B-Instruct-FP8 | instruct | 256k | ~70 GB | ~140 GB | 🤗 Hugging Face • 🤖 ModelScope |
| Qwen3-Coder-30B-A3B-Instruct-FP8 | instruct | 256k | ~6 GB | ~12 GB | 🤗 Hugging Face • 🤖 ModelScope |
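If it helps to sanity-check or regenerate these numbers, a common rule of thumb is that weight memory is roughly parameter count × bytes per parameter (2 for FP16, 4 for FP32). A minimal sketch under that assumption — it covers weights only and ignores KV cache, activations, and runtime overhead, so real requirements are higher, and the parameter counts are read from the model names:

```python
def estimate_weight_vram_gb(num_params_billion: float, bytes_per_param: int) -> float:
    """Rough VRAM needed just to hold the weights.

    num_params_billion: total parameter count in billions (from the model name).
    bytes_per_param: 2 for FP16, 4 for FP32.
    Ignores KV cache, activations, and framework overhead.
    """
    # N billion params * bytes each, expressed in GB (1e9 bytes)
    return num_params_billion * 1e9 * bytes_per_param / 1e9


# Parameter counts taken from the model names; purely illustrative.
for name, params_b in [
    ("Qwen3-Coder-480B-A35B-Instruct", 480),
    ("Qwen3-Coder-30B-A3B-Instruct", 30),
]:
    fp16 = estimate_weight_vram_gb(params_b, 2)
    fp32 = estimate_weight_vram_gb(params_b, 4)
    print(f"{name}: ~{fp16:.0f} GB (FP16), ~{fp32:.0f} GB (FP32)")
```

Note that for MoE models (e.g. 30B-A3B) all experts must still fit in memory, so the total parameter count is what matters for loading, even though only the active parameters (3B) are used per token.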
