
Add drop_last option to DataLoader batching #3448


Open · wants to merge 1 commit into main

Conversation

hollermay

Introduces a drop_last flag to DataLoaderBuilder and FixBatchStrategy, allowing the final incomplete batch to be optionally dropped during data loading. Updates the builder API and batch strategy logic to support this, bringing batching behavior in line with PyTorch's DataLoader.

Pull Request Template

Checklist

  • Confirmed that the cargo run-checks command has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Fixes issue: DataLoader yields as many iterations as num_workers instead of the correct batch count; no drop_last support (see #3316)

Changes

Problem:
The DataLoader previously yielded one batch per worker per epoch, regardless of batch size or dataset size, leading to incorrect iteration counts; for example, 100 samples with batch_size = 10 and num_workers = 4 produced only 4 batches per epoch instead of 10. There was also no way to drop incomplete batches, unlike PyTorch's DataLoader.

Solution:

  • Introduced a drop_last flag to DataLoaderBuilder and FixBatchStrategy.
  • Updated the builder API to allow users to set drop_last.
  • Modified the batch strategy logic so that, when drop_last is true, incomplete batches are dropped.
  • Refactored the multithreaded DataLoader to use a shared batch queue among workers, ensuring the number of iterations per epoch is determined by dataset size and batch size, not by num_workers (a standalone sketch of this design follows the list).
  • This brings the DataLoader’s behavior in line with PyTorch and improves flexibility for batch processing.
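A minimal standalone sketch of the intended behavior, assuming workers pull precomputed batch ranges from a single shared queue; the names and structure here are illustrative only and are not Burn's actual DataLoader or FixBatchStrategy API, and the sizes (103 samples, batch size 10, 4 workers) are arbitrary example values:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let dataset_size = 103;
    let batch_size = 10;
    let num_workers = 4;
    let drop_last = true;

    // Precompute the batch index ranges for one epoch.
    let mut batches: Vec<std::ops::Range<usize>> = (0..dataset_size)
        .step_by(batch_size)
        .map(|start| start..usize::min(start + batch_size, dataset_size))
        .collect();
    // When drop_last is enabled, discard the trailing incomplete batch.
    if drop_last {
        batches.retain(|range| range.len() == batch_size);
    }
    let expected = batches.len();

    // Workers drain a single shared queue, so the batch count per epoch
    // depends only on dataset_size, batch_size, and drop_last.
    let queue = Arc::new(Mutex::new(batches));
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for worker in 0..num_workers {
        let queue = Arc::clone(&queue);
        let tx = tx.clone();
        handles.push(thread::spawn(move || loop {
            let next = queue.lock().unwrap().pop();
            match next {
                Some(range) => tx.send((worker, range)).unwrap(),
                None => break,
            }
        }));
    }
    drop(tx);

    let produced = rx.iter().count();
    for handle in handles {
        handle.join().unwrap();
    }
    // 103 samples, batch size 10, drop_last = true -> 10 full batches,
    // regardless of how many workers are spawned.
    assert_eq!(produced, expected);
    println!("{produced} batches from {num_workers} workers");
}
```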

Testing

  • Ran cargo test --workspace --all-features to ensure all tests pass.
  • Manually verified that:
    • The number of iterations per epoch matches ceil(dataset_size / batch_size) for various num_workers values when incomplete batches are kept (see the batch-count sketch after this list).
    • The drop_last flag correctly drops incomplete batches when enabled.
    • Changing num_workers only affects parallelism, not batch count.
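A sketch of the batch-count arithmetic these checks assume (a plain helper, not Burn code): ceil when the incomplete batch is kept, floor when drop_last is enabled, with num_workers not appearing in the formula at all.

```rust
// Expected number of batches per epoch; num_workers is intentionally absent.
fn expected_batches(dataset_size: usize, batch_size: usize, drop_last: bool) -> usize {
    if drop_last {
        dataset_size / batch_size // floor: trailing incomplete batch is dropped
    } else {
        dataset_size.div_ceil(batch_size) // ceil: trailing incomplete batch is kept
    }
}

fn main() {
    assert_eq!(expected_batches(100, 10, false), 10);
    assert_eq!(expected_batches(103, 10, false), 11); // trailing batch of 3 kept
    assert_eq!(expected_batches(103, 10, true), 10);  // trailing batch of 3 dropped
    assert_eq!(expected_batches(9, 10, true), 0);     // dataset smaller than one batch
}
```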

hollermay (Author)

@laggui kindly review
