[Train] Add PyTorch local mode support for multi-process training with torchrun #56218
Conversation
Signed-off-by: xgui <xgui@anyscale.com>
```python
logger = logging.getLogger(__name__)


def is_torch_dist_env_set() -> bool:
```
nit: rename to `is_torch_distributed_env_vars_set()` to align with https://github.com/ray-project/ray/blob/master/python/ray/train/torch/config.py#L143 ?
Also, does the `CUDA_VISIBLE_DEVICES` env var need to be set for the CUDA environment?
Good call. Let me change to align.
On second thought, I think the current env variables are the minimum requirement for processes to communicate with each other.
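For reference, a minimal sketch of what such a detection helper might look like. The helper name follows the rename suggested above, and the exact variable list is an assumption based on this thread (torchrun exports `RANK`, `LOCAL_RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT`, among others):

```python
import os

# Assumed minimum set of variables the workers need to rendezvous;
# torchrun sets all of these on every worker process.
TORCH_DIST_ENV_VARS = ("RANK", "LOCAL_RANK", "WORLD_SIZE", "MASTER_ADDR", "MASTER_PORT")


def is_torch_distributed_env_vars_set() -> bool:
    """Return True when the process appears to have been launched by torchrun."""
    return all(v in os.environ for v in TORCH_DIST_ENV_VARS)
```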
python/ray/train/v2/_internal/execution/local_mode_torch_utils.py
```python
assert os.environ["LOCAL_WORLD_SIZE"] == str(
    world_size
), "Local mode only supports 1 node, LOCAL_WORLD_SIZE should be equal to WORLD_SIZE."
```
I don't think this is actually correct. In the example in the PR description, shouldn't LOCAL_WORLD_SIZE be 4?
```bash
# Option 2: Multi-process local mode with torchrun
RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py
```
Hmm, isn't LOCAL_WORLD_SIZE == world_size == 4?
Oh you are right... I mixed it up.
Do we want to support multiple nodes?
E.g.
```bash
RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=4 --nproc-per-node=1 test.py
```
I see in the comment it does say "Local mode only supports 1 node", but is this intentional? Why not support multiple?
Good point. Actually, we can support multiple nodes.
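If multi-node support is added, the single-node assertion above could be relaxed so that `LOCAL_WORLD_SIZE` only needs to divide `WORLD_SIZE` evenly. A hypothetical sketch (the function name is illustrative, not from the PR):

```python
import os


def validate_torchrun_world_sizes() -> None:
    """Relaxed check allowing multi-node torchrun launches.

    In the single-node case LOCAL_WORLD_SIZE == WORLD_SIZE; with multiple
    homogeneous nodes, WORLD_SIZE must simply be a multiple of LOCAL_WORLD_SIZE.
    """
    world_size = int(os.environ["WORLD_SIZE"])
    local_world_size = int(os.environ["LOCAL_WORLD_SIZE"])
    if world_size % local_world_size != 0:
        raise ValueError(
            f"WORLD_SIZE ({world_size}) must be a multiple of "
            f"LOCAL_WORLD_SIZE ({local_world_size})."
        )
```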
```python
if dist.is_initialized():
    dist.destroy_process_group()
```
nit: I don't think this is actually needed for torchrun because the process gets shut down. Can perhaps remove it for now and add back if we identify issues without it.
This is not needed in a real run, but it is needed in pytest, where different tests run in the same process.
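If the cleanup were removed from the library code, the pytest concern could instead be handled in the test suite itself. A sketch of an autouse fixture that could go in a `conftest.py` (the fixture name is illustrative, not from the PR):

```python
import pytest


@pytest.fixture(autouse=True)
def destroy_default_process_group():
    """Tear down any default process group a test left behind.

    Under torchrun each worker process exits after training, so cleanup is
    implicit; pytest, however, runs many tests in one process, so a process
    group leaked by one test would break the next.
    """
    yield
    import torch.distributed as dist  # imported lazily; not all tests need torch

    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()
```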
Signed-off-by: xgui <xgui@anyscale.com>
…l run in the same process Signed-off-by: xgui <xgui@anyscale.com>
matthewdeng
left a comment
Mostly lgtm, add tests?
For the torch change, it is already covered by the existing torch trainer tests. For the multi-process change, it is hard to test inside a unit test, so I am planning to add release tests.
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com>
Signed-off-by: xgui <xgui@anyscale.com>
[Train] Add PyTorch local mode support for multi-process training with torchrun (ray-project#56218)

This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.**

<img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" />

### Note

Ray Data on multiple processes is not supported. This might need to wait for ray-project#55114 or similar components.

## Key Changes

### Multi-Process Local Mode Support

- **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly
- **Torchrun Integration**: Users can now launch multiple local mode processes using the `torchrun` command
- **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training

## Usage Example

```python
import os
import tempfile

import torch
from torch.nn import CrossEntropyLoss
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchvision.models import resnet18
from torchvision.datasets import FashionMNIST
from torchvision.transforms import ToTensor, Normalize, Compose

import ray
import ray.train.torch
from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer


def train_func():
    # Model, Loss, Optimizer
    model = resnet18(num_classes=10)
    model.conv1 = torch.nn.Conv2d(
        1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
    )
    # [1] Prepare model.
    model = ray.train.torch.prepare_model(model)
    criterion = CrossEntropyLoss()
    optimizer = Adam(model.parameters(), lr=0.001)

    # Data
    transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))])
    data_dir = os.path.join(tempfile.gettempdir(), "data")
    train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform)
    train_loader = DataLoader(train_data, batch_size=128, shuffle=True)
    # [2] Prepare dataloader.
    train_loader = ray.train.torch.prepare_data_loader(train_loader)

    # Training
    for epoch in range(10):
        if ray.train.get_context().get_world_size() > 1:
            train_loader.sampler.set_epoch(epoch)

        for images, labels in train_loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # [3] Report metrics and checkpoint.
        metrics = {"loss": loss.item(), "epoch": epoch}
        with tempfile.TemporaryDirectory() as temp_checkpoint_dir:
            torch.save(
                model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt")
            )
            ray.train.report(
                metrics,
                checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir),
            )
        if ray.train.get_context().get_world_rank() == 0:
            print(metrics)


# Configuration for local mode
use_gpu = True
scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu)  # Local mode
run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1))

# Note: Ray Data is not supported with multiple processes in local mode.
# For multi-process training, use a PyTorch DataLoader as shown above.

# Initialize the Trainer
trainer = TorchTrainer(
    train_loop_per_worker=train_func,
    scaling_config=scaling_config,
    run_config=run_config,
)

# Train the model
result = trainer.fit()
```

### Running Options:

```bash
# Option 1: Single-process local mode
RAY_TRAIN_V2_ENABLED=1 python test.py

# Option 2: Multi-process local mode with torchrun
RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py

# Option 3: Switch to distributed Ray Train (change num_workers=4)
# The same training code works across all modes!
```

---------

Signed-off-by: xgui <xgui@anyscale.com>
Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Signed-off-by: Zhiqiang Ma <zhiqiang.ma@intel.com>
…h torchrun (ray-project#56218) This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for ray-project#55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: zac <zac@anyscale.com>
…h torchrun (#56218) This PR extends the Ray Train v2 local mode support (from #55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for #55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: elliot-barn <elliot.barnwell@anyscale.com>
…h torchrun (ray-project#56218) This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for ray-project#55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: Marco Stephan <marco@magic.dev>
…h torchrun (#56218) This PR extends the Ray Train v2 local mode support (from #55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for #55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: elliot-barn <elliot.barnwell@anyscale.com>
…h torchrun (ray-project#56218) This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for ray-project#55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com> Signed-off-by: Douglas Strodtman <douglas@anyscale.com>
…h torchrun (ray-project#56218) This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.** <img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" /> ### Note Ray data on multiple processes is not supported. Might need to wait for ray-project#55114 or similar components. ## Key Changes ### Multi-Process Local Mode Support - **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly - **Torchrun Integration**: Users can now launch multiple local mode processes using `torchrun` command - **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training ## Usage Example ```python import os import tempfile import torch from torch.nn import CrossEntropyLoss from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.models import resnet18 from torchvision.datasets import FashionMNIST from torchvision.transforms import ToTensor, Normalize, Compose import ray from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig from ray.train.torch import TorchTrainer from ray.train.v2.api.config import FailureConfig import ray.train.torch def train_func(): # Model, Loss, Optimizer model = resnet18(num_classes=10) model.conv1 = torch.nn.Conv2d( 1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False ) # [1] Prepare model. 
model = ray.train.torch.prepare_model(model) criterion = CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=0.001) # Data transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))]) data_dir = os.path.join(tempfile.gettempdir(), "data") train_data = FashionMNIST(root=data_dir, train=True, download=True, transform=transform) train_loader = DataLoader(train_data, batch_size=128, shuffle=True) # [2] Prepare dataloader. train_loader = ray.train.torch.prepare_data_loader(train_loader) # Training for epoch in range(10): if ray.train.get_context().get_world_size() > 1: train_loader.sampler.set_epoch(epoch) for images, labels in train_loader: outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # [3] Report metrics and checkpoint. metrics = {"loss": loss.item(), "epoch": epoch} with tempfile.TemporaryDirectory() as temp_checkpoint_dir: torch.save( model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt") ) ray.train.report( metrics, checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir), ) if ray.train.get_context().get_world_rank() == 0: print(metrics) # Configuration for local mode use_gpu = True scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu) # Local mode run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1)) # Note: Ray Data not supported with multiple processes in local mode # For multi-process training, use PyTorch DataLoader as shown above # Initialize the Trainer trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=scaling_config, run_config=run_config, ) # Train the model result = trainer.fit() ``` ### Running Options: ```bash # Option 1: Single process local mode RAY_TRAIN_V2_ENABLED=1 python test.py # Option 2: Multi-process local mode with torchrun RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py # Option 3: Switch to distributed Ray Train (change num_workers=4) # Same training 
code works across all modes! ``` --------- Signed-off-by: xgui <xgui@anyscale.com> Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
…h torchrun Original PR #56218 by xinyuangui2 Original: ray-project/ray#56218
…ti-process training with torchrun Merged from original PR #56218 Original: ray-project/ray#56218
[Train] Add PyTorch local mode support for multi-process training with torchrun (ray-project#56218)

This PR extends the Ray Train v2 local mode support (from ray-project#55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. **With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.**

<img width="1249" height="811" alt="image" src="https://github.com/user-attachments/assets/5d998b5e-8f58-425a-b535-d4f4d0b64a5c" />

### Note
Ray Data on multiple processes is not supported. This might need to wait for ray-project#55114 or similar components.

## Key Changes

### Multi-Process Local Mode Support
- **`LocalTorchController`**: New controller that detects torchrun env variables and sets contexts accordingly
- **Torchrun Integration**: Users can now launch multiple local mode processes using the `torchrun` command
- **Environment Detection**: Automatically detects torchrun environment variables and initializes distributed training

## Usage Example

```python
import os
import tempfile

import torch
from torch.nn import CrossEntropyLoss
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchvision.models import resnet18
from torchvision.datasets import FashionMNIST
from torchvision.transforms import ToTensor, Normalize, Compose

import ray
from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer
from ray.train.v2.api.config import FailureConfig
import ray.train.torch


def train_func():
    # Model, Loss, Optimizer
    model = resnet18(num_classes=10)
    model.conv1 = torch.nn.Conv2d(
        1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
    )
    # [1] Prepare model.
    model = ray.train.torch.prepare_model(model)
    criterion = CrossEntropyLoss()
    optimizer = Adam(model.parameters(), lr=0.001)

    # Data
    transform = Compose([ToTensor(), Normalize((0.28604,), (0.32025,))])
    data_dir = os.path.join(tempfile.gettempdir(), "data")
    train_data = FashionMNIST(
        root=data_dir, train=True, download=True, transform=transform
    )
    train_loader = DataLoader(train_data, batch_size=128, shuffle=True)
    # [2] Prepare dataloader.
    train_loader = ray.train.torch.prepare_data_loader(train_loader)

    # Training
    for epoch in range(10):
        if ray.train.get_context().get_world_size() > 1:
            train_loader.sampler.set_epoch(epoch)

        for images, labels in train_loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # [3] Report metrics and checkpoint.
        metrics = {"loss": loss.item(), "epoch": epoch}
        with tempfile.TemporaryDirectory() as temp_checkpoint_dir:
            torch.save(
                model.state_dict(), os.path.join(temp_checkpoint_dir, "model.pt")
            )
            ray.train.report(
                metrics,
                checkpoint=ray.train.Checkpoint.from_directory(temp_checkpoint_dir),
            )
        if ray.train.get_context().get_world_rank() == 0:
            print(metrics)


# Configuration for local mode
use_gpu = True
scaling_config = ScalingConfig(num_workers=0, use_gpu=use_gpu)  # Local mode
run_config = RunConfig(checkpoint_config=CheckpointConfig(num_to_keep=1))

# Note: Ray Data is not supported with multiple processes in local mode.
# For multi-process training, use a PyTorch DataLoader as shown above.

# Initialize the Trainer
trainer = TorchTrainer(
    train_loop_per_worker=train_func,
    scaling_config=scaling_config,
    run_config=run_config,
)

# Train the model
result = trainer.fit()
```

### Running Options:

```bash
# Option 1: Single process local mode
RAY_TRAIN_V2_ENABLED=1 python test.py

# Option 2: Multi-process local mode with torchrun
RAY_TRAIN_V2_ENABLED=1 torchrun --standalone --nnodes=1 --nproc-per-node=4 test.py

# Option 3: Switch to distributed Ray Train (change num_workers=4)
# Same training code works across all modes!
```

---------

Signed-off-by: xgui <xgui@anyscale.com>
Signed-off-by: Xinyuan <43737116+xinyuangui2@users.noreply.github.com>
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>

This PR extends the Ray Train v2 local mode support (from #55487) to enable users to launch multiple local mode processes using torchrun for PyTorch distributed training. With this new feature, users can easily switch between torchrun and Ray Train without modifying their training code.
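The switch works because the controller first checks whether the process was launched by torchrun. The PR's helper for this is `is_torch_dist_env_set` (in `local_mode_torch_utils.py`); a minimal sketch of such a check is below. The variable list shown is the full set torchrun exports per worker; per the review discussion, the actual helper may check only the minimum subset processes need to communicate.

```python
import os

# Environment variables torchrun exports for every worker it spawns.
# The subset the PR actually checks may be smaller.
TORCHRUN_ENV_VARS = (
    "MASTER_ADDR",
    "MASTER_PORT",
    "RANK",
    "WORLD_SIZE",
    "LOCAL_RANK",
    "LOCAL_WORLD_SIZE",
)


def is_torch_dist_env_set() -> bool:
    """Return True if this process appears to have been launched by torchrun."""
    return all(var in os.environ for var in TORCHRUN_ENV_VARS)
```

When this returns False, local mode falls back to a single-process world, so plain `python test.py` keeps working unchanged.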
Note
Ray Data on multiple processes is not supported. This might need to wait for #55114 or similar components.
Key Changes
Multi-Process Local Mode Support
- `LocalTorchController`: New controller that detects torchrun env variables and sets contexts accordingly
- Torchrun Integration: Users can now launch multiple local mode processes using the `torchrun` command
- Environment Detection: Automatically detects torchrun environment variables and initializes distributed training
Usage Example
Running Options:
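In options 1 and 2 above, the same script adapts because the worker context can be derived straight from the env vars torchrun sets, with defaults covering the plain single-process launch. A hypothetical sketch follows; the `LocalDistContext` dataclass and function name are illustrative, not the PR's actual API:

```python
import os
from dataclasses import dataclass


@dataclass
class LocalDistContext:
    """Illustrative container mirroring what ray.train.get_context() exposes."""

    world_rank: int
    local_rank: int
    world_size: int


def context_from_torchrun_env() -> LocalDistContext:
    # torchrun exports RANK, LOCAL_RANK, and WORLD_SIZE for each worker it
    # spawns; the defaults cover the single-process `python test.py` case.
    return LocalDistContext(
        world_rank=int(os.environ.get("RANK", 0)),
        local_rank=int(os.environ.get("LOCAL_RANK", 0)),
        world_size=int(os.environ.get("WORLD_SIZE", 1)),
    )
```

With `--nproc-per-node=4`, each of the four processes sees its own `RANK`/`LOCAL_RANK` and a `WORLD_SIZE` of 4, which is what makes `get_world_size()`-based code in the training loop behave the same as under distributed Ray Train.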
Checks
- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I've added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.