
Conversation

@gursimar
Contributor

Description

This PR adds working, tested examples for running single-controller LoRA training with the vLLM backend.
It builds on the existing LoRA + vLLM support (RFC #609) and demonstrates how to configure and launch a single-controller GRPO workflow.


What’s included

  • New GRPO example for single-controller LoRA with vLLM
  • Corresponding YAML config illustrating minimal wiring
  • No changes to core engine or runtime behavior

Files changed

Kept the files in the examples/lora folder on purpose; IMO, all LoRA examples should live under this folder.

  • examples/lora/gsm8k_grpo_vllm_single_controller.py — single-controller GRPO LoRA example
  • examples/lora/gsm8k_grpo_vllm_single_controller.yaml — config for vLLM backend

Running instructions

python examples/lora/gsm8k_grpo_vllm_single_controller.py --config examples/lora/gsm8k_grpo_vllm_single_controller.yaml
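
For orientation, the example's main loop looks roughly like the sketch below. This is a hedged reconstruction stitched from the review snippets quoted later in this thread; the loop bookkeeping (step_iterator yielding (epoch, step) pairs) and the elided GRPO update are assumptions, and the script itself is authoritative.

for global_step, (epoch, step) in enumerate(step_iterator):  # hypothetical iteration bookkeeping
    # Submit prompts to the vLLM backend and collect rollouts.
    batch = actor.prepare_batch(
        train_dataloader,
        workflow="areal.workflow.rlvr.RLVRWorkflow",
        workflow_kwargs=workflow_kwargs,
    )

    # ... GRPO update of the LoRA adapters on `batch` (elided) ...

    with stats_tracker.record_timing("save"):
        saver.save(actor, epoch, step, global_step, tokenizer=tokenizer)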

Testing

  • Verified the example runs end-to-end with the vLLM backend.
  • No regressions to existing LoRA or non-LoRA flows.

Type of Change

  • New feature (non-breaking change that adds functionality)
  • Bug fix
  • Breaking change
  • Documentation update
  • Code refactoring
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes
  • No critical issues raised by AI reviewers (/gemini review)

Need help? Check the Contributing Guide or ask in GitHub Discussions!

@gemini-code-assist
Contributor

Summary of Changes

Hello @gursimar, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a practical and verified example for fine-tuning models using LoRA with a single controller and the vLLM inference engine. It serves as a blueprint for users looking to implement GRPO workflows with these specific technologies, providing both the Python script and the necessary YAML configuration to get started. The primary goal is to expand the existing LoRA + vLLM capabilities with a concrete, runnable demonstration.

Highlights

  • New Example for LoRA Fine-tuning: This pull request introduces a new, fully tested example demonstrating single-controller LoRA (Low-Rank Adaptation) training utilizing the vLLM backend.
  • GRPO Workflow Configuration: It provides a clear example of how to configure and launch a single-controller GRPO (Group Relative Policy Optimization) workflow, building on existing LoRA + vLLM support.
  • Dedicated Configuration File: A corresponding YAML configuration file is included, illustrating the minimal wiring required to set up this specific LoRA fine-tuning scenario.
  • No Core Engine Changes: The changes are confined to examples and configuration, ensuring no modifications to the core engine or runtime behavior of the system.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new example for single-controller LoRA fine-tuning with the vLLM backend. The changes include a Python script for the training workflow and a corresponding YAML configuration file. The code is well-structured for an example script. My review includes a couple of suggestions for the Python script: remove a magic number to improve maintainability, and add a placeholder for the evaluation step that the configuration appears to intend but is currently missing.

@gursimar gursimar force-pushed the single_controller_lora branch from 12d9725 to fd4fd16 Compare December 17, 2025 00:19
Collaborator

@garrett4wade left a comment

While the implementation looks great, I'd still like to confirm the details about learning performance.

The previous SPMD LoRA code has an unresolved bug: if multiple inference engines submit rollout requests concurrently, learning performance drops significantly. As a workaround, we only submit requests on rank 0 (code). Only with this workaround does the learning curve basically match full-parameter tuning.

I wonder whether the bug still exists in single-controller mode. Can you provide learning curves comparing this new script with the default SPMD, full-parameter tuning script? Hopefully there is no performance drop anymore.
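
For reference, the rank-0 workaround described above follows roughly this pattern. This is a minimal sketch, assuming an initialized torch.distributed process group; the gather_rollout_batch helper name is hypothetical, not the actual AReaL code.

import torch.distributed as dist

def gather_rollout_batch(actor, data, workflow_kwargs):
    # Only rank 0 submits rollout requests to the inference engine; the
    # result is then broadcast so every rank sees the same batch.
    obj = [None]
    if dist.get_rank() == 0:
        obj[0] = actor.rollout_batch(
            data,
            workflow="areal.workflow.rlvr.RLVRWorkflow",
            workflow_kwargs=workflow_kwargs,
        )
    dist.broadcast_object_list(obj, src=0)
    return obj[0]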

Comment on lines +208 to +220
with stats_tracker.record_timing("save"):
saver.save(actor, epoch, step, global_step, tokenizer=tokenizer)

with stats_tracker.record_timing("checkpoint_for_recover"):
recover_handler.dump(
actor,
step_info,
saver,
evaluator,
stats_logger,
train_dataloader,
tokenizer=tokenizer,
)
Collaborator

The single-controller training script has been slightly changed. It now has an additional clear_batch call. Please refer to the latest script for details.

FYI, we are working on merging the scripts into trainers now.
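
A hedged sketch of what the updated tail of the loop might look like; the clear_batch signature and its exact placement are assumptions here, so treat the latest script on main as authoritative.

# ... save and checkpoint_for_recover blocks as in the snippet above ...

# Assumed addition per the comment: release the rollout batch before the
# next iteration. Exact signature and placement per the main-branch script.
actor.clear_batch(batch)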

Comment on lines +166 to +177
if config.rollout.max_head_offpolicyness > 0:
    batch = actor.prepare_batch(
        train_dataloader,
        workflow="areal.workflow.rlvr.RLVRWorkflow",
        workflow_kwargs=workflow_kwargs,
    )
else:
    batch = actor.rollout_batch(
        next(data_generator),
        workflow="areal.workflow.rlvr.RLVRWorkflow",
        workflow_kwargs=workflow_kwargs,
    )
Collaborator

We should only use prepare_batch. Please check the latest script in the main branch.
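
That is, the if/else above collapses to a single call; a minimal sketch, using the same arguments as the snippet above:

# Per the reviewer, prepare_batch alone suffices, so the rollout_batch
# branch can be dropped.
batch = actor.prepare_batch(
    train_dataloader,
    workflow="areal.workflow.rlvr.RLVRWorkflow",
    workflow_kwargs=workflow_kwargs,
)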

@gursimar gursimar force-pushed the single_controller_lora branch from fd4fd16 to 90b7da1 Compare December 17, 2025 23:03