Unify llama generate utils #3318
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3318
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
albanD
left a comment
Sounds good to me.
But I'll let an ao maintainer accept the PR.
jerryzh168
left a comment
looks good, as long as it works
torchao/_models/llama/generate.py
Outdated
```diff
-    if torch.xpu.is_available()
-    else "cpu"
-)
+default_device = acc.type if (acc := torch.accelerator.current_accelerator(True)) else "cpu"
```
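For context, a minimal sketch of the before/after pattern this line unifies (the per-backend chain is reconstructed from the diff fragment above; the exact surrounding code may differ):

```python
import torch

# Before (sketch): hand-written per-backend availability checks.
default_device = (
    "cuda"
    if torch.cuda.is_available()
    else "xpu"
    if torch.xpu.is_available()
    else "cpu"
)

# After: torch.accelerator reports whichever accelerator backend is
# available, so new backends need no extra branches here.
acc = torch.accelerator.current_accelerator(True)
default_device = acc.type if acc is not None else "cpu"
```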
Maybe spell out the arg of current_accelerator to be clearer: https://docs.pytorch.org/docs/stable/generated/torch.accelerator.current_accelerator.html
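A sketch of what the suggested change looks like, assuming the keyword is check_available as in the linked docs:

```python
import torch

# Same one-liner as in the diff, with the positional True spelled
# out as check_available=True so the intent is explicit.
default_device = (
    acc.type
    if (acc := torch.accelerator.current_accelerator(check_available=True))
    else "cpu"
)
```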
@jerryzh168 Done.
Force-pushed from 57478ea to 09e0994 (Compare)
Motivation
Unify llama generate utils via torch.accelerator APIs.

cc @albanD