Allow reusing input data in TBE benchmark #3594


Closed
sryap wants to merge 1 commit

Conversation

@sryap (Contributor) commented Jan 21, 2025

Summary:
Add `--num-requests` to TBE's `device` benchmark to allow input batches to
be reused. By default, `--num-requests` is set to -1; in this case, the
benchmark generates `iters` batches. If it is set, the benchmark generates
`num_requests` batches. If this value is smaller than `iters`, input batches
are reused (i.e., iteration `i` uses batch `i % num_requests`).

Differential Revision: D68340968
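
For illustration, here is a minimal sketch of the reuse logic described above. This is not the benchmark's actual code; `make_batch` and `run_tbe` are hypothetical placeholders.

```python
def make_batch():
    # Hypothetical stand-in for the benchmark's request/batch generator.
    return object()


def run_tbe(batch) -> None:
    # Hypothetical stand-in for a single TBE benchmark iteration.
    pass


def run_benchmark(iters: int, num_requests: int = -1) -> None:
    # -1 (the default) means one freshly generated batch per iteration.
    if num_requests <= 0:
        num_requests = iters
    # Generate num_requests batches up front.
    requests = [make_batch() for _ in range(num_requests)]
    for i in range(iters):
        # When num_requests < iters, batches are reused round-robin:
        # iteration i uses batch i % num_requests.
        run_tbe(requests[i % num_requests])
```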

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D68340968

netlify bot commented Jan 21, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: f40d831
🔍 Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/67900fb2fc4a1e0008a9a9fe
😎 Deploy Preview: https://deploy-preview-3594--pytorch-fbgemm-docs.netlify.app

sryap added a commit to sryap/FBGEMM that referenced this pull request Jan 21, 2025
Summary:
X-link: facebookresearch/FBGEMM#674

Differential Revision: D68340968
@facebook-github-bot (Contributor)

This pull request has been merged in 51d2be6.

q10 pushed a commit to q10/FBGEMM that referenced this pull request Apr 10, 2025
Summary:
Pull Request resolved: facebookresearch/FBGEMM#674

X-link: pytorch#3594

Reviewed By: gajjanag

Differential Revision: D68340968

fbshipit-source-id: fdae703ec499f3ba2656cba3a3b4967c684058f5