
Commit de66ee1

Merge pull request #2820 from QwenLM/feat/qwen3.6-plus-support
🎉 feat: add Qwen3.6-Plus model support
2 parents 92f7549 + 1f51e06 commit de66ee1

File tree

12 files changed: +46 −125 lines changed


README.md

Lines changed: 18 additions & 26 deletions
````diff
@@ -18,19 +18,23 @@
 
 </div>
 
-> 🎉 **News (2026-02-16)**: Qwen3.5-Plus is now live! Sign in via Qwen OAuth to use it directly, or get an API key from [Alibaba Cloud ModelStudio](https://modelstudio.console.alibabacloud.com?tab=doc#/doc/?type=model&url=2840914_2&modelId=group-qwen3.5-plus) to access it through the OpenAI-compatible API.
+## 🎉 News
 
-Qwen Code is an open-source AI agent for the terminal, optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder). It helps you understand large codebases, automate tedious work, and ship faster.
+- **2026-04-02**: Qwen3.6-Plus is now live! Sign in via Qwen OAuth to use it directly, or get an API key from [Alibaba Cloud ModelStudio](https://modelstudio.console.alibabacloud.com/ap-southeast-1?tab=doc#/doc/?type=model&url=2840914_2&modelId=qwen3.6-plus) to access it through the OpenAI-compatible API.
 
-![](https://gw.alicdn.com/imgextra/i1/O1CN01D2DviS1wwtEtMwIzJ_!!6000000006373-2-tps-1600-900.png)
+- **2026-02-16**: Qwen3.5-Plus is now live!
 
 ## Why Qwen Code?
 
+Qwen Code is an open-source AI agent for the terminal, optimized for Qwen series models. It helps you understand large codebases, automate tedious work, and ship faster.
+
 - **Multi-protocol, OAuth free tier**: use OpenAI / Anthropic / Gemini-compatible APIs, or sign in with Qwen OAuth for 1,000 free requests/day.
 - **Open-source, co-evolving**: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
 - **Agentic workflow, feature-rich**: rich built-in tools (Skills, SubAgents) for a full agentic workflow and a Claude Code-like experience.
 - **Terminal-first, IDE-friendly**: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.
 
+![](https://gw.alicdn.com/imgextra/i1/O1CN01D2DviS1wwtEtMwIzJ_!!6000000006373-2-tps-1600-900.png)
+
 ## Installation
 
 ### Quick Install (Recommended)
````
````diff
@@ -148,8 +152,8 @@ Here is a complete example:
   "modelProviders": {
     "openai": [
       {
-        "id": "qwen3-coder-plus",
-        "name": "qwen3-coder-plus",
+        "id": "qwen3.6-plus",
+        "name": "qwen3.6-plus",
         "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
         "description": "Qwen3-Coder via Dashscope",
         "envKey": "DASHSCOPE_API_KEY"
@@ -165,7 +169,7 @@ Here is a complete example:
     }
   },
   "model": {
-    "name": "qwen3-coder-plus"
+    "name": "qwen3.6-plus"
   }
 }
 ```
````
````diff
@@ -175,7 +179,7 @@ Here is a complete example:
 | Field | What it does |
 | --- | --- |
 | `modelProviders` | Declares which models are available and how to connect to them. Keys like `openai`, `anthropic`, `gemini` represent the API protocol. |
-| `modelProviders[].id` | The model ID sent to the API (e.g. `qwen3-coder-plus`, `gpt-4o`). |
+| `modelProviders[].id` | The model ID sent to the API (e.g. `qwen3.6-plus`, `gpt-4o`). |
 | `modelProviders[].envKey` | The name of the environment variable that holds your API key. |
 | `modelProviders[].baseUrl` | The API endpoint URL (required for non-default endpoints). |
 | `env` | A fallback place to store API keys (lowest priority; prefer `.env` files or `export` for sensitive keys). |
````
````diff
@@ -200,29 +204,17 @@ Use the `/model` command at any time to switch between all configured models.
   "modelProviders": {
     "openai": [
       {
-        "id": "qwen3.5-plus",
-        "name": "qwen3.5-plus (Coding Plan)",
+        "id": "qwen3.6-plus",
+        "name": "qwen3.6-plus (Coding Plan)",
         "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
-        "description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
-        "envKey": "BAILIAN_CODING_PLAN_API_KEY",
-        "generationConfig": {
-          "extra_body": {
-            "enable_thinking": true
-          }
-        }
-      },
-      {
-        "id": "qwen3-coder-plus",
-        "name": "qwen3-coder-plus (Coding Plan)",
-        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
-        "description": "qwen3-coder-plus from ModelStudio Coding Plan",
+        "description": "qwen3.6-plus from ModelStudio Coding Plan",
         "envKey": "BAILIAN_CODING_PLAN_API_KEY"
       },
       {
-        "id": "qwen3-coder-next",
-        "name": "qwen3-coder-next (Coding Plan)",
+        "id": "qwen3.5-plus",
+        "name": "qwen3.5-plus (Coding Plan)",
         "baseUrl": "https://coding.dashscope.aliyuncs.com/v1",
-        "description": "qwen3-coder-next with thinking enabled from ModelStudio Coding Plan",
+        "description": "qwen3.5-plus with thinking enabled from ModelStudio Coding Plan",
         "envKey": "BAILIAN_CODING_PLAN_API_KEY",
         "generationConfig": {
           "extra_body": {
@@ -265,7 +257,7 @@ Use the `/model` command at any time to switch between all configured models.
     }
   },
   "model": {
-    "name": "qwen3-coder-plus"
+    "name": "qwen3.6-plus"
   }
 }
 ```
````
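Both settings examples above end by pointing `model.name` at `qwen3.6-plus`. As an illustration only, the sketch below shows one way such a settings object could be resolved to a provider entry; the helper name, the interfaces, and the match-by-`id` rule are assumptions for this sketch, not Qwen Code's actual implementation.

```typescript
// Hypothetical resolver: mirrors the JSON keys shown in the README diff above.
interface ModelProviderEntry {
  id: string;
  name: string;
  baseUrl: string;
  description?: string;
  envKey: string;
}

interface Settings {
  modelProviders: Record<string, ModelProviderEntry[]>;
  model: { name: string };
}

// Assumed rule: `model.name` selects an entry by `id` across protocol buckets.
function resolveActiveModel(settings: Settings): ModelProviderEntry | undefined {
  for (const entries of Object.values(settings.modelProviders)) {
    const hit = entries.find((e) => e.id === settings.model.name);
    if (hit) return hit;
  }
  return undefined;
}

const settings: Settings = {
  modelProviders: {
    openai: [
      {
        id: 'qwen3.6-plus',
        name: 'qwen3.6-plus',
        baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
        envKey: 'DASHSCOPE_API_KEY',
      },
    ],
  },
  model: { name: 'qwen3.6-plus' },
};

console.log(resolveActiveModel(settings)?.baseUrl);
```

With the config above, the resolver would pick the `qwen3.6-plus` entry and read the API key from the `DASHSCOPE_API_KEY` environment variable.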

packages/cli/src/i18n/locales/de.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1336,8 +1336,8 @@ export default {
   unknown: 'unbekannt',
   "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
     "Modell konnte nicht auf '{{modelId}}' umgestellt werden.\n\n{{error}}",
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — effizientes Hybridmodell mit führender Programmierleistung',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — effizientes Hybridmodell mit führender Programmierleistung',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     'Das neueste Qwen Vision Modell von Alibaba Cloud ModelStudio (Version: qwen3-vl-plus-2025-09-23)',
```

packages/cli/src/i18n/locales/en.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1385,8 +1385,8 @@ export default {
   unknown: 'unknown',
   "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
     "Failed to switch model to '{{modelId}}'.\n\n{{error}}",
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — efficient hybrid model with leading coding performance',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — efficient hybrid model with leading coding performance',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)',
```

packages/cli/src/i18n/locales/ja.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1026,8 +1026,8 @@ export default {
   'not set': '未設定',
   none: 'なし',
   unknown: '不明',
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — 効率的なハイブリッドモデル、業界トップクラスのコーディング性能',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — 効率的なハイブリッドモデル、業界トップクラスのコーディング性能',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     'Alibaba Cloud ModelStudioの最新Qwen Visionモデル(バージョン: qwen3-vl-plus-2025-09-23)',
   // Dialogs - Permissions
```

packages/cli/src/i18n/locales/pt.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1339,8 +1339,8 @@ export default {
   unknown: 'desconhecido',
   "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
     "Falha ao trocar o modelo para '{{modelId}}'.\n\n{{error}}",
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — modelo híbrido eficiente com desempenho líder em programação',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — modelo híbrido eficiente com desempenho líder em programação',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     'O modelo Qwen Vision mais recente do Alibaba Cloud ModelStudio (versão: qwen3-vl-plus-2025-09-23)',
```
13461346

packages/cli/src/i18n/locales/ru.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1262,8 +1262,8 @@ export default {
   unknown: 'неизвестно',
   "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
     "Не удалось переключиться на модель '{{modelId}}'.\n\n{{error}}",
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — эффективная гибридная модель с лидирующей производительностью в программировании',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — эффективная гибридная модель с лидирующей производительностью в программировании',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     'Последняя модель Qwen Vision от Alibaba Cloud ModelStudio (версия: qwen3-vl-plus-2025-09-23)',
```
12691269

packages/cli/src/i18n/locales/zh.js

Lines changed: 2 additions & 2 deletions
```diff
@@ -1309,8 +1309,8 @@ export default {
   unknown: '未知',
   "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
     "无法切换到模型 '{{modelId}}'.\n\n{{error}}",
-  'Qwen 3.5 Plus — efficient hybrid model with leading coding performance':
-    'Qwen 3.5 Plus — 高效混合架构,编程性能业界领先',
+  'Qwen 3.6 Plus — efficient hybrid model with leading coding performance':
+    'Qwen 3.6 Plus — 高效混合架构,编程性能业界领先',
   'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
     '来自阿里云 ModelStudio 的最新 Qwen Vision 模型(版本:qwen3-vl-plus-2025-09-23)',
```
13161316

packages/core/src/core/openaiContentGenerator/provider/dashscope.test.ts

Lines changed: 4 additions & 68 deletions
```diff
@@ -762,19 +762,7 @@ describe('DashScopeOpenAICompatibleProvider', () => {
 
       const result = provider.buildRequest(request, 'test-prompt-id');
 
-      expect(result.max_tokens).toBe(65536); // Should be limited to model's output limit (64K)
-    });
-
-    it('should limit max_tokens when it exceeds model limit for qwen-vl-max-latest', () => {
-      const request: OpenAI.Chat.ChatCompletionCreateParams = {
-        model: 'qwen-vl-max-latest',
-        messages: [{ role: 'user', content: 'Hello' }],
-        max_tokens: 20000, // Exceeds the 8192 limit
-      };
-
-      const result = provider.buildRequest(request, 'test-prompt-id');
-
-      expect(result.max_tokens).toBe(8192); // Should be limited to model's output limit
+      expect(result.max_tokens).toBe(32768); // Should be limited to model's output limit (32K)
     });
 
     it('should not modify max_tokens when it is within model limit', () => {
@@ -799,7 +787,7 @@ describe('DashScopeOpenAICompatibleProvider', () => {
       const result = provider.buildRequest(request, 'test-prompt-id');
 
       // Should set conservative default (min of model limit and DEFAULT_OUTPUT_TOKEN_LIMIT)
-      // qwen3-max has 64K output limit, so min(64K, 32K) = 32K
+      // qwen3-max has 32K output limit, so min(32K, 32K) = 32K
       expect(result.max_tokens).toBe(32000);
     });
 
@@ -845,7 +833,7 @@ describe('DashScopeOpenAICompatibleProvider', () => {
       const result = provider.buildRequest(request, 'test-prompt-id');
 
       // max_tokens should be limited
-      expect(result.max_tokens).toBe(65536); // Limited to model's output limit (64K)
+      expect(result.max_tokens).toBe(32768); // Limited to model's output limit (32K)
 
       // Other parameters should be preserved
       expect(result.temperature).toBe(0.8);
@@ -856,58 +844,6 @@ describe('DashScopeOpenAICompatibleProvider', () => {
       expect(result.user).toBe('test-user');
     });
 
-    it('should work with vision models and output token limits', () => {
-      const request: OpenAI.Chat.ChatCompletionCreateParams = {
-        model: 'qwen-vl-max-latest',
-        messages: [
-          {
-            role: 'user',
-            content: [
-              { type: 'text', text: 'Look at this image:' },
-              {
-                type: 'image_url',
-                image_url: { url: 'https://example.com/image.jpg' },
-              },
-            ],
-          },
-        ],
-        max_tokens: 20000, // Exceeds the model's output limit
-      };
-
-      const result = provider.buildRequest(request, 'test-prompt-id');
-
-      expect(result.max_tokens).toBe(8192); // Should be limited to model's output limit (8K)
-      expect(
-        (result as { vl_high_resolution_images?: boolean })
-          .vl_high_resolution_images,
-      ).toBe(true); // Vision-specific parameter should be preserved
-    });
-
-    it('should set high resolution flag for qwen3-vl-plus', () => {
-      const request: OpenAI.Chat.ChatCompletionCreateParams = {
-        model: 'qwen3-vl-plus',
-        messages: [
-          {
-            role: 'user',
-            content: [
-              { type: 'text', text: 'Please inspect the image.' },
-              {
-                type: 'image_url',
-                image_url: { url: 'https://example.com/vl.jpg' },
-              },
-            ],
-          },
-        ],
-      };
-
-      const result = provider.buildRequest(request, 'test-prompt-id');
-
-      expect(
-        (result as { vl_high_resolution_images?: boolean })
-          .vl_high_resolution_images,
-      ).toBe(true);
-    });
-
     it('should set high resolution flag for the coder-model model', () => {
       const request: OpenAI.Chat.ChatCompletionCreateParams = {
         model: 'coder-model',
@@ -945,7 +881,7 @@ describe('DashScopeOpenAICompatibleProvider', () => {
 
       const result = provider.buildRequest(request, 'test-prompt-id');
 
-      expect(result.max_tokens).toBe(65536); // Should be limited to model's output limit (64K)
+      expect(result.max_tokens).toBe(32768); // Should be limited to model's output limit (32K)
       expect(result.stream).toBe(true); // Streaming should be preserved
     });
```
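The updated expectations all exercise one rule: a requested `max_tokens` is clamped to the model's output limit, which for models without an explicit entry now falls back to 32K rather than 8K. A minimal sketch of that clamp, with an assumed helper name and limit table (the real logic lives in the provider and `tokenLimits.ts`):

```typescript
// Sketch (assumed names, not the actual provider code): cap a requested
// max_tokens at the model's output limit, mirroring the tests above.
const OUTPUT_LIMITS: Record<string, number> = {
  'qwen3.5-plus': 65536,
  'qwen3.6-plus': 65536,
};
const DEFAULT_OUTPUT_TOKEN_LIMIT = 32768; // Qwen fallback after this commit

function clampMaxTokens(model: string, requested?: number): number {
  const limit = OUTPUT_LIMITS[model] ?? DEFAULT_OUTPUT_TOKEN_LIMIT;
  // No explicit request: fall back to the limit itself (the real provider
  // applies a further conservative cap, per the 32000 expectation above).
  if (requested === undefined) return limit;
  return Math.min(requested, limit);
}

console.log(clampMaxTokens('qwen3-max', 70000)); // → 32768 (fallback limit)
console.log(clampMaxTokens('qwen3.6-plus', 70000)); // → 65536 (explicit limit)
```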

packages/core/src/core/openaiContentGenerator/provider/dashscope.ts

Lines changed: 1 addition & 3 deletions
```diff
@@ -150,9 +150,7 @@ export class DashScopeOpenAICompatibleProvider extends DefaultOpenAICompatiblePr
   }
 
   override getDefaultGenerationConfig(): GenerateContentConfig {
-    return {
-      temperature: 0.3,
-    };
+    return {};
   }
 
   /**
```
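The effect of emptying the default generation config: a user-supplied temperature still applies, but absent one, no temperature key is sent and the service-side default is used instead of a forced 0.3. A merge sketch under that assumption (hypothetical helper, not the actual provider code):

```typescript
// Sketch (assumed shape): later sources win; keys absent everywhere stay
// absent, so the API's own default applies.
interface GenConfig {
  temperature?: number;
}

function mergeGenerationConfig(defaults: GenConfig, user: GenConfig): GenConfig {
  return { ...defaults, ...user };
}

// Before this commit the provider default was { temperature: 0.3 }.
const before = mergeGenerationConfig({ temperature: 0.3 }, {});
// After, the default is {}: nothing is forced onto the request.
const after = mergeGenerationConfig({}, {});

console.log(before.temperature); // → 0.3
console.log('temperature' in after); // → false
```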

packages/core/src/core/tokenLimits.test.ts

Lines changed: 7 additions & 11 deletions
```diff
@@ -270,14 +270,11 @@
   describe('Qwen output limits', () => {
     it('should return correct output limits for Qwen models', () => {
       expect(tokenLimit('qwen3.5-plus', 'output')).toBe(65536);
-      expect(tokenLimit('qwen3-max', 'output')).toBe(65536);
-      expect(tokenLimit('qwen3-max-2026-01-23', 'output')).toBe(65536);
+      expect(tokenLimit('qwen3.6-plus', 'output')).toBe(65536);
       expect(tokenLimit('coder-model', 'output')).toBe(65536);
-      // Models without specific output limits fall back to default
-      expect(tokenLimit('qwen3-coder-plus', 'output')).toBe(8192);
-      expect(tokenLimit('qwen3-coder-next', 'output')).toBe(8192);
-      expect(tokenLimit('qwen3-vl-plus', 'output')).toBe(8192);
-      expect(tokenLimit('qwen-vl-max-latest', 'output')).toBe(8192);
+      // Models without specific output limits fall back to Qwen default (32K)
+      expect(tokenLimit('qwen3-max', 'output')).toBe(32768);
+      expect(tokenLimit('qwen3-max-2026-01-23', 'output')).toBe(32768);
     });
   });
 
@@ -314,7 +311,7 @@
   describe('input vs output comparison', () => {
     it('should return different limits for input vs output', () => {
       expect(tokenLimit('qwen3-max', 'input')).toBe(262144);
-      expect(tokenLimit('qwen3-max', 'output')).toBe(65536);
+      expect(tokenLimit('qwen3-max', 'output')).toBe(32768);
     });
 
     it('should default to input type when no type is specified', () => {
@@ -325,9 +322,8 @@
 
   describe('normalization with output limits', () => {
     it('should handle normalized model names for output limits', () => {
-      expect(tokenLimit('QWEN3-MAX', 'output')).toBe(65536);
-      expect(tokenLimit('qwen3-max-20250601', 'output')).toBe(65536);
-      expect(tokenLimit('QWEN-VL-MAX-LATEST', 'output')).toBe(8192);
+      expect(tokenLimit('QWEN3-MAX', 'output')).toBe(32768);
+      expect(tokenLimit('qwen3-max-20250601', 'output')).toBe(32768);
     });
   });
 });
```
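The test changes above encode the new limit table: `qwen3.5-plus`, `qwen3.6-plus`, and `coder-model` keep a 64K output limit, while everything else (including `qwen3-max` and its dated variants) falls back to 32K. A sketch of such a lookup, with the case and date-suffix normalization the tests rely on (assumed implementation, not the actual `tokenLimits.ts`):

```typescript
// Sketch (assumed implementation): output-limit lookup with normalization,
// matching the expectations in the test diff above.
const OUTPUT_LIMITS: Record<string, number> = {
  'qwen3.5-plus': 65536,
  'qwen3.6-plus': 65536,
  'coder-model': 65536,
};
const DEFAULT_OUTPUT_LIMIT = 32768;

function outputTokenLimit(model: string): number {
  // Lowercase, then strip a trailing date suffix like -2026-01-23 or -20250601.
  const normalized = model
    .toLowerCase()
    .replace(/-\d{4}-\d{2}-\d{2}$/, '')
    .replace(/-\d{8}$/, '');
  return OUTPUT_LIMITS[normalized] ?? DEFAULT_OUTPUT_LIMIT;
}

console.log(outputTokenLimit('QWEN3-MAX')); // → 32768
console.log(outputTokenLimit('qwen3.6-plus')); // → 65536
```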
