
[doc] chore: Bug fixes for the qwen3-235b model in 256k scenarios#5908

Merged
wucong25 merged 4 commits into verl-project:main from autbuster:abc
Apr 8, 2026

Conversation

@autbuster
Contributor

@autbuster autbuster commented Apr 8, 2026

What does this PR do?

Bug fixes for the qwen3-235b model in 256k (context length) scenarios.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
  • If this PR involves multiple modules, separate them with commas, like [megatron, fsdp, doc]
  • {type} is one of feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching
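
The title rules above can be sketched as a small validation check. This is a hypothetical illustration, not the actual CI script; the module and type names are copied verbatim from the checklist, and the regex is an assumption about how the format could be enforced:

```python
import re

# Module and type names taken from the checklist above.
MODULES = {
    "fsdp", "megatron", "veomni", "sglang", "vllm", "rollout", "trainer",
    "ci", "training_utils", "recipe", "hardware", "deployment", "ray",
    "worker", "single_controller", "misc", "perf", "model", "algo", "env",
    "tool", "ckpt", "doc", "data", "cfg", "reward", "fully_async",
    "one_step_off",
}
TYPES = {"feat", "fix", "refactor", "chore", "test"}

# Optional [BREAKING] prefix, then [{modules}] {type}: {description}.
TITLE_RE = re.compile(r"^(\[BREAKING\])?\[([^\]]+)\] (\w+): .+$")

def title_ok(title: str) -> bool:
    """Check a PR title against the [{modules}] {type}: {description} format."""
    m = TITLE_RE.match(title)
    if not m:
        return False
    modules = [s.strip() for s in m.group(2).split(",")]
    return all(mod in MODULES for mod in modules) and m.group(3) in TYPES

print(title_ok("[doc] chore: Bug fixes for the qwen3-235b model in 256k scenarios"))  # True
print(title_ok("[BREAKING][fsdp, megatron] feat: dynamic batching"))                  # True
print(title_ok("[docs] chore: typo"))  # False: "docs" is not a listed module
```

Note how the third example fails: "docs" is not in the module list, which is exactly the mismatch seen in the title-rename churn later in this conversation.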

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment and show results such as training curve plots or evaluation numbers.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the Qwen3-235B training script to target the Instruct model and GSM8K dataset, alongside adjustments to Megatron parallelism, pipeline layer configurations, and optimizer hyperparameters. A critical configuration error was identified where the expert parallel size (train_ep=16) exceeds the calculated data parallel size (8) for the default 16-node setup, which will cause initialization failure.

# Actor Model Configuration
ACTOR_ARGS=(
actor_rollout_ref.actor.strategy=megatron
# actor_rollout_ref.actor.strategy=megatron
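
The initialization failure flagged by the review can be reasoned about with simple arithmetic. The sketch below assumes a Megatron-style layout; the TP/PP/CP values are illustrative assumptions chosen only so the data-parallel size comes out to 8 for the 16-node setup, as stated in the review:

```python
def data_parallel_size(world_size: int, tp: int, pp: int, cp: int) -> int:
    """In a Megatron-style layout, DP = world_size / (TP * PP * CP)."""
    assert world_size % (tp * pp * cp) == 0, "parallel sizes must divide world size"
    return world_size // (tp * pp * cp)

world_size = 16 * 8  # 16 nodes x 8 GPUs per node (assumed)
dp = data_parallel_size(world_size, tp=4, pp=4, cp=1)  # illustrative TP/PP/CP -> DP = 8
train_ep = 16        # expert parallel size from the script

# Expert parallelism is laid out over the data-parallel ranks, so EP must
# not exceed (and must evenly divide) DP; train_ep=16 > dp=8 fails the check.
print(dp, train_ep, train_ep <= dp and dp % train_ep == 0)  # 8 16 False
```

Under these assumptions, raising DP (more nodes, or smaller TP/PP) or lowering train_ep to at most 8 would satisfy the constraint.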
Collaborator


Delete the comment. [Translated from Chinese: 注释删掉]

wucong25 previously approved these changes Apr 8, 2026
@autbuster autbuster changed the title [fix] fix: Bug fixes for the qwen3-235b model in 256k scenarios [doc] chore: Bug fixes for the qwen3-235b model in 256k scenarios Apr 8, 2026
@wucong25 wucong25 changed the title [doc] chore: Bug fixes for the qwen3-235b model in 256k scenarios [docs] chore: Bug fixes for the qwen3-235b model in 256k scenarios Apr 8, 2026
@wucong25 wucong25 changed the title [docs] chore: Bug fixes for the qwen3-235b model in 256k scenarios [doc] chore: Bug fixes for the qwen3-235b model in 256k scenarios Apr 8, 2026
@wucong25 wucong25 merged commit 3edc45a into verl-project:main Apr 8, 2026
5 of 6 checks passed
