
[Bugfix] Enable step-wise execution #81

Open

knlnguyen1802 wants to merge 1 commit into verl-project:main from knlnguyen1802:stepwise_fix

[Bugfix] Enable step-wise execution#81
knlnguyen1802 wants to merge 1 commit into
verl-project:mainfrom
knlnguyen1802:stepwise_fix

Conversation

knlnguyen1802 (Collaborator) commented on May 14, 2026

What does this PR do?

A temporary fix to enable step-wise execution for vllm-omni.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, vllm_omni, rollout, trainer, ci, training_utils, recipe, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, diffusion, omni, tests, docker
    • If this PR involves multiple modules, separate them with commas, e.g. [diffusion, doc]
    • {type} is one of feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][diffusion, fsdp] feat: new rollout scheduler

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review; otherwise the reviewer may deprioritize this PR.

  • Read the Contribute Guide.
  • Apply pre-commit checks: pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always
  • Add / Update the documentation.
  • Add unit or end-to-end test(s) to the CI workflow to cover all the code. If not feasible, explain why: ...

gemini-code-assist (Bot, Contributor) left a comment


Code Review

This pull request introduces support for pre-tokenized prompt IDs and step-wise execution within the vllm_omni_rollout_adapter.py. Key additions include the _extract_prompt_ids, _tokenize_text_prompt, and prepare_encode methods, which facilitate the initialization of diffusion states from tokenized inputs. Additionally, the async server configuration now enables step_execution and sets a maximum sequence limit. Feedback indicates that since step_execution is enabled, the adapter must also override the per-step execution methods to ensure RL-specific fields are correctly collected. Other suggestions include correcting a class name typo in a ValueError and utilizing the build_img_shapes helper function to reduce code duplication.
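
The summary above mentions enabling step_execution in the async server configuration. A purely hypothetical sketch of what that change might look like; the option names `step_execution` and `max_num_seqs` are assumed names for what the summary describes, not confirmed vllm-omni configuration keys:

```python
# Hypothetical sketch only: `step_execution` and `max_num_seqs` are assumed
# names, not confirmed vllm-omni configuration keys.
engine_kwargs = {
    "step_execution": True,  # run the denoising loop one step at a time
    "max_num_seqs": 8,       # cap on concurrent sequences handled by the server
}
```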

).to(self.device)
return tokens.input_ids, tokens.attention_mask

def prepare_encode(

Severity: high

To fully support step-wise execution for RL rollouts, this class likely needs to override the per-step execution method (e.g., execute_step or step). The current implementation overrides diffuse to collect all_log_probs and all_latents during the denoising loop. If step_execution is enabled in the engine, the engine will bypass diffuse and call the per-step method instead. Without an override that performs similar data collection and state updates (like incrementing state.step_index), these RL-specific fields will be missing from the final output.
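
A minimal sketch of the kind of override this comment asks for; the method name `execute_step`, the state fields, and the `_denoise_one_step` helper are all assumptions drawn from the comment itself, not confirmed vllm-omni API:

```python
# Hypothetical sketch: `execute_step`, `state.step_index`, and
# `_denoise_one_step` are assumed names, not confirmed vllm-omni API.
def execute_step(self, state):
    # Perform one denoising step, mirroring the data collection that the
    # overridden `diffuse` does for the whole loop.
    latents, log_probs = self._denoise_one_step(state)  # hypothetical helper
    state.all_latents.append(latents)       # RL rollout needs per-step latents
    state.all_log_probs.append(log_probs)   # ...and per-step log-probs
    state.step_index += 1                   # advance so the engine schedules the next step
    return state
```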

Comment on lines +214 to +217
raise ValueError(
    "QwenImagePipelineWithLogProbForTest.prepare_encode requires either "
    "'prompt_ids' or a text 'prompt' in state.prompts[0]."
)

Severity: medium

The error message contains a typo in the class name, referring to QwenImagePipelineWithLogProbForTest instead of QwenImagePipelineWithLogProb.

Suggested change:

-    raise ValueError(
-        "QwenImagePipelineWithLogProbForTest.prepare_encode requires either "
-        "'prompt_ids' or a text 'prompt' in state.prompts[0]."
-    )
+    raise ValueError(
+        "QwenImagePipelineWithLogProb.prepare_encode requires either "
+        "'prompt_ids' or a text 'prompt' in state.prompts[0]."
+    )

None,
)

img_shapes = [[(1, height // self.vae_scale_factor // 2, width // self.vae_scale_factor // 2)]] * batch_size

Severity: medium

This logic for building img_shapes is already implemented in the build_img_shapes utility function in common.py. It is better to use the utility to avoid code duplication and ensure consistency.

Suggested change:

-    img_shapes = [[(1, height // self.vae_scale_factor // 2, width // self.vae_scale_factor // 2)]] * batch_size
+    img_shapes = build_img_shapes(height, width, batch_size, self.vae_scale_factor)
References
  1. Avoid code duplication by reusing existing helper functions for common logic, such as constructing image shapes.
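
For reference, a minimal sketch of what the `build_img_shapes` helper presumably looks like, inferred from the duplicated expression above; the actual implementation in common.py may differ:

```python
# Sketch inferred from the duplicated expression; the real helper in
# common.py may differ in details.
def build_img_shapes(height: int, width: int, batch_size: int, vae_scale_factor: int):
    # The latent grid is the pixel size divided by the VAE scale factor and
    # then by the transformer's 2x2 patch size.
    shape = (1, height // vae_scale_factor // 2, width // vae_scale_factor // 2)
    return [[shape]] * batch_size
```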

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Collaborator commented:

I don't really understand why we need to add so many things to make it work.

knlnguyen1802 (Collaborator, Author) commented:

The main problem is that vllm-omni does not officially support passing prompt_token_ids as input:
the pipeline's main logic still tries to tokenize the prompt itself.
That is why the custom pipeline needs to align with the new prepare_encode function added for step-wise execution.
To make this stable and compatible, support from the vllm-omni side would be better.
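
To illustrate the workaround, a sketch of the fallback logic described here, reusing the helper names from this PR's diff (`_extract_prompt_ids`, `_tokenize_text_prompt`); the surrounding state layout and return handling are assumptions:

```python
# Sketch of the workaround; helper names come from this PR, but the state
# layout and return handling are assumptions.
def prepare_encode(self, state):
    prompt = state.prompts[0]
    prompt_ids = self._extract_prompt_ids(prompt)  # pre-tokenized IDs, if provided
    if prompt_ids is None:
        text = prompt.get("prompt")
        if text is None:
            raise ValueError(
                "QwenImagePipelineWithLogProb.prepare_encode requires either "
                "'prompt_ids' or a text 'prompt' in state.prompts[0]."
            )
        # Fall back to tokenizing locally, since vllm-omni's pipeline would
        # otherwise try to tokenize the text itself.
        prompt_ids, attention_mask = self._tokenize_text_prompt(text)
    # ... continue initializing the diffusion state from prompt_ids ...
```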

Collaborator commented:

I see. So vllm-omni does not truly support --skip-tokenizer-init (https://docs.vllm.ai/en/stable/configuration/engine_args/#modelconfig) to accept prompt_token_ids.

Can we make a feature request for this?
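
For reference, core vLLM already accepts pre-tokenized input when the tokenizer is skipped; a minimal sketch of the behavior the feature request would ask vllm-omni to match (this uses the plain vLLM API, and the model name and token IDs are placeholders):

```python
from vllm import LLM, SamplingParams, TokensPrompt

# Plain vLLM (not vllm-omni): with skip_tokenizer_init=True the engine never
# loads a tokenizer, so the caller supplies token IDs directly.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", skip_tokenizer_init=True)
params = SamplingParams(detokenize=False)  # no tokenizer available to detokenize
outputs = llm.generate(TokensPrompt(prompt_token_ids=[151644, 872, 198]), params)
```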

knlnguyen1802 (Collaborator, Author) commented:

Yes, I will work on it.


Collaborator commented:

Does this conflict with the change in #66, which renames prompt_ids → prompt_token_ids for vllm-omni 0.20+?

