
External LLM Usage Notice #985

@tanzhenxin

Description


Thank you for using Qwen Code!
We’ve noticed several issues related to external LLM providers and models.
To clarify our support policy, please refer to the following 👇


1️⃣ Priority Support

For users of the Qwen OAuth auth type or the Aliyun Bailian inference platform:

  • We provide full and prioritized support for all model, inference, and performance issues.
  • This includes model invocation errors, inference performance issues, and quality optimization.

2️⃣ Limited Support

For users of external LLM providers or models (e.g., OpenAI, DeepSeek, Kimi, GLM, Idealab):

  • We offer limited support;
  • Blocking issues (such as runtime errors or API failures) will be handled with priority;
  • Feature requests and performance feedback are currently outside official support;
  • We warmly welcome community PRs to improve related areas.
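For reference, external OpenAI-compatible providers are typically wired up through environment variables. This is a minimal sketch, assuming Qwen Code's OpenAI-compatible auth mode reads the variables below (as described in the project README); verify the exact names against your installed version before relying on them:

```shell
# Point Qwen Code at an external OpenAI-compatible provider.
# Variable names are assumptions based on the project README;
# check the docs for your installed version.
export OPENAI_API_KEY="sk-..."                        # your provider's API key
export OPENAI_BASE_URL="https://api.deepseek.com/v1"  # provider endpoint (DeepSeek shown)
export OPENAI_MODEL="deepseek-chat"                   # a model the provider serves
qwen
```

Issues hit with such a setup fall under the limited-support policy above: runtime errors and API failures are handled with priority, while quality or feature feedback is best raised as a community PR.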

3️⃣ Local Inference Disclaimer

For users running local model inference:

  • Due to hardware and model constraints, frequent issues may occur;
  • Such cases are not officially supported;
  • We strongly recommend using an online inference platform — especially Qwen OAuth,
    which offers 2000 free API calls per day.
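For completeness, the (unsupported) local-inference setup usually means running a local OpenAI-compatible server and pointing the CLI at it. A hedged sketch, assuming an Ollama server and the same environment variables as above; none of these names are guaranteed by the support policy:

```shell
# Unsupported local-inference setup: a local OpenAI-compatible server
# (Ollama shown here, default port 11434) with Qwen Code pointed at it.
# Variable names are assumptions; verify against the project README.
ollama serve &                                      # start the local server
export OPENAI_API_KEY="ollama"                      # placeholder; local servers often ignore it
export OPENAI_BASE_URL="http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
export OPENAI_MODEL="qwen2.5-coder"                 # any locally pulled model
qwen
```

As the notice says, failures in this configuration are expected and not officially supported; the Qwen OAuth path with its daily free quota is the recommended alternative.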


Labels: FAQ · This will not be worked on