
Commit b45ee32

[megatron] fix: ValueError when unpacking preprocess_thd_engine result in router replay (verl-project#5891)
### What does this PR do?

Fixes `ValueError: too many values to unpack (expected 2)` in `merge_router_topk_indices` when using router replay (R2) with non-legacy workers. `preprocess_thd_engine` returns a 3-tuple `(tensor, PackedSeqParams, Optional[Tensor])`, but the call site unpacked only 2 values. This fix is necessary to run R2 router replay with non-legacy engine workers, in particular in fully async mode.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: [preprocess_thd_engine](https://github.com/verl-project/verl/pulls?q=is%3Apr+preprocess_thd_engine), [R2](https://github.com/verl-project/verl/pulls?q=is%3Apr+R2), [router replay](https://github.com/verl-project/verl/pulls?q=is%3Apr+router+replay)
- [x] Format the PR title as `[{modules}] {type}: {description}` (this will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`, `fully_async`, `one_step_off`
  - If this PR involves multiple modules, separate them with `,`, like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

This is a one-line fix matching the function's return type annotation. No existing tests cover this code path (router replay with nested tensors). The fix was validated by inspecting the `preprocess_thd_engine` signature in `verl/models/mcore/util.py`, which returns `tuple[torch.Tensor, PackedSeqParams, Optional[torch.Tensor]]`.

### API and Usage Example

No API changes.

### Design & Code Changes

No design changes, just a one-line fix.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / update [the documentation](https://github.com/volcengine/verl/tree/main/docs). (not applicable)
- [x] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: just a bug fix
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [x] If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`. (not applicable)
1 parent f92bc5c commit b45ee32

File tree

1 file changed (+1, -1 lines)


verl/utils/megatron/router_replay_utils.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -253,7 +253,7 @@ def merge_router_topk_indices(attention_mask, input_ids, mini_layer_topk_idx_lis
 
     if input_ids.is_nested:
         batch_size = input_ids.shape[0]
-        _, packed_seq_params = preprocess_thd_engine(input_ids, pre_process=True)
+        _, packed_seq_params, _ = preprocess_thd_engine(input_ids, pre_process=True)
         layers_topk_idx = postprocess_thd_engine(
             layers_topk_idx, packed_seq_params, input_ids, batch_size, post_process=True
         )
```
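To make the failure mode concrete, here is a minimal sketch. `preprocess_thd_engine_stub` is a hypothetical stand-in for the real `preprocess_thd_engine` (whose annotation in `verl/models/mcore/util.py` is `tuple[torch.Tensor, PackedSeqParams, Optional[torch.Tensor]]`); the placeholder return values are illustrative only.

```python
# Hypothetical stub standing in for preprocess_thd_engine, which returns a
# 3-tuple (tensor, packed_seq_params, optional_extra) per its annotation.
def preprocess_thd_engine_stub(input_ids, pre_process=True):
    return "tensor", "packed_seq_params", None

# Buggy call site: unpacking a 3-tuple into 2 names raises ValueError.
try:
    _, packed_seq_params = preprocess_thd_engine_stub(None)
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)

# Fixed call site: a trailing `_` discards the Optional[torch.Tensor].
_, packed_seq_params, _ = preprocess_thd_engine_stub(None)
```

Because tuple unpacking checks arity at runtime, the extra `_` is all that is needed; no behavior of the discarded third element is relied on.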
