How does original PyTorch call XLA's ops? #1385
❓ Questions and Help
Recently, I have been looking into the pytorch/xla code, but I am confused about a few things. What is the internal mechanism by which PyTorch calls XLA's ops? Any reply will be much appreciated. Thanks.

Comments
Yes, we implement an XLA ATen backend.
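As a rough illustration of what an "ATen backend" means mechanically, here is a minimal sketch using the modern PyTorch dispatcher API: kernels are registered under the XLA dispatch key, and any op called on XLA tensors is routed to them. The kernel name `my_xla_add` is hypothetical; pytorch/xla generates its actual registrations from an operator list, and older releases used a different registration mechanism.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical kernel for aten::add.Tensor. A real XLA kernel would lower
// the op onto an XLA computation graph rather than execute eagerly.
at::Tensor my_xla_add(const at::Tensor& self, const at::Tensor& other,
                      const at::Scalar& alpha) {
  TORCH_CHECK(false, "stub: lowering to XLA would happen here");
  return self; // unreachable; keeps the stub well-formed
}

// Register the kernel under the XLA dispatch key. When the dispatcher sees
// XLA tensors in an aten::add.Tensor call, it invokes my_xla_add instead of
// the CPU/CUDA kernels.
TORCH_LIBRARY_IMPL(aten, XLA, m) {
  m.impl("add.Tensor", my_xla_add);
}
```

With such a registration in place, an ordinary `torch.add(a, b)` on tensors living on an XLA device never reaches the CPU kernel; dispatch is keyed off the tensors' dispatch key set.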
@alanzhai219, do you need any more info on this?
Thanks
@alanzhai219 For #2, for ops unsupported by XLA, we punt them back to CPU and then send the results back to the TPU. This part of the logic is auto-generated; if you have a local build, you can find it in the generated sources. For #3, the TensorFlow calls in XLA are expected, since XLA provides some interfaces to TF, and PyTorch/XLA may reuse the same interfaces where applicable. Hope this is helpful. Also, I'm curious: are you looking at a particular part, or just trying to understand the codebase better? ;)
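A minimal sketch of that punt-to-CPU pattern, written as a hand-rolled unary helper. The real fallback in pytorch/xla is auto-generated and operates on boxed ops, so take this only as an illustration of the data flow under those assumptions:

```cpp
#include <torch/torch.h>

// Move an unsupported op's input to CPU, run the native CPU kernel there,
// and copy the result back to the tensor's original (XLA) device.
torch::Tensor cpu_fallback_unary(
    const torch::Tensor& self,
    torch::Tensor (*cpu_op)(const torch::Tensor&)) {
  const auto original_device = self.device(); // e.g. an XLA device
  auto cpu_result = cpu_op(self.to(torch::kCPU));
  return cpu_result.to(original_device);
}
```

The round trip is also why unsupported ops are slow on TPU: each call pays a device-to-host transfer, a CPU execution, and a host-to-device transfer.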
@ailzhang Thanks for your reply. I am just trying to understand the codebase and figure out the relationship between the PyTorch backend extension mechanism and XLA. 😁
@ailzhang @taylanbil @dlibenzi I have a question: why does an XLATensor have no storage, while a PyTorch Tensor does? Given that, how do ops deal with such a storage-less XLATensor? I have thought about it for two days and can't understand it.
Having a storage is not a requirement for PyTorch, as long as you intercept the proper ATen hooks and deal with the ops.
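Here is a hedged sketch of what a storage-less tensor looks like at the C++ level, assuming the pattern of pytorch/xla's XLATensorImpl; the class name and constructor arguments below are illustrative, not the real implementation:

```cpp
#include <c10/core/Device.h>
#include <c10/core/DispatchKeySet.h>
#include <c10/core/ScalarTypeToTypeMeta.h>
#include <c10/core/TensorImpl.h>

// A TensorImpl constructed without any Storage: ATen only needs metadata
// (dispatch key, dtype, device). The actual data lives behind a
// backend-specific handle, e.g. a node in an XLA computation graph.
struct StoragelessXlaImpl : public c10::TensorImpl {
  StoragelessXlaImpl()
      : c10::TensorImpl(
            c10::DispatchKeySet(c10::DispatchKey::XLA), // routes ops to XLA kernels
            c10::scalarTypeToTypeMeta(c10::kFloat),     // dtype metadata only
            c10::Device(c10::DeviceType::XLA, 0)) {}    // device metadata, no Storage
  // The real XLATensorImpl holds an XLATensor handle as a member and
  // overrides size/stride queries; kernels never read a raw data pointer.
};
```

Because every op on such a tensor is intercepted at dispatch time and handled by the backend's own kernels, nothing on the hot path ever asks for a raw `data_ptr()`, so the missing storage is never observed.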