Add initial linear algebra function specifications #20
Conversation
The spec is currently silent as to whether transpose returns a copy or a view.
It would be good to keep a tracking issue for this, which we can add to whenever we encounter a function whose copy/view behavior is unclear or varies across libraries.
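To make the distinction concrete, here is a minimal sketch using NumPy purely as an illustration (the spec itself does not name a library) of why copy/view behavior is observable: with view semantics, mutations through the transpose are visible in the original array.

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
y = x.T            # NumPy's transpose returns a view, not a copy

y[0, 0] = 99       # mutating the view...
print(x[0, 0])     # ...is visible through the original array: prints 99

z = x.T.copy()     # an explicit copy decouples the two arrays
z[0, 1] = -1
print(x[1, 0])     # unchanged: prints 3
```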
Are we satisfied with the design decisions included here?
All seems reasonable to me.
Further support for including all of NumPy's norms comes from pending updates to Torch (see pytorch/pytorch#42749).
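For reference, a sketch of the range of norms NumPy exposes through a single ord parameter, using numpy.linalg.norm for illustration (whether the spec adopts the same parameterization is an open design question, not something this example decides):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([3.0, 4.0])

# Vector norms selected via the `ord` parameter
print(np.linalg.norm(v))               # 2-norm (default): 5.0
print(np.linalg.norm(v, ord=1))        # 1-norm: 7.0
print(np.linalg.norm(v, ord=np.inf))   # max norm: 4.0

# Matrix norms via the same function
print(np.linalg.norm(A, ord='fro'))    # Frobenius norm
print(np.linalg.norm(A, ord=2))        # spectral norm (largest singular value)
print(np.linalg.norm(A, ord='nuc'))    # nuclear norm (sum of singular values)
```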
The interface is inspired by TensorFlow (see https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose) and Torch (see https://pytorch.org/docs/stable/generated/torch.transpose.html), and allows transposing a stack of matrices.
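A sketch of the stacked-transpose behavior being referenced, written with NumPy for illustration; the helper name below is hypothetical, since the spec's own function name and signature are what this PR defines.

```python
import numpy as np

def matrix_transpose(x):
    # Hypothetical helper mirroring tf.linalg.matrix_transpose and
    # torch.transpose(x, -2, -1): swap only the last two axes, so any
    # leading "stack" (batch) dimensions are preserved.
    return np.swapaxes(x, -1, -2)

batch = np.arange(24).reshape(4, 2, 3)   # a stack of four 2x3 matrices
out = matrix_transpose(batch)
print(out.shape)                          # (4, 3, 2)
```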
Re: copy/view. In this case, for
Created gh-24 for this topic.
I believe the copy/view discussion won't affect the content of this PR; whatever we end up doing there, there seems to be agreement that it's not possible to spec actual copy/view behaviour, and that therefore it should not matter (except performance-wise) whether a particular method returns a copy or a view. All other comments were addressed, so this should be good as is. Merging, thanks @kgryte.
I un-ticked the linear algebra box in the tracking issue, because this was just the initial set - the more complex functions like
This PR adds initial linear algebra function specifications.

Notes

This proposal tries to thread the needle and specify the set of behavior which is most common among array libraries, while also ensuring consistency among the specified functions. For example, the proposal does not include support for 2-element vectors when computing the cross product (NumPy) or for specifying axes independently (NumPy), as this is not supported by Torch or TensorFlow (see the sketch after the questions below).

Questions

What data types should linear algebra functions support? All supported array data types or, e.g., just floating-point?
Whether transpose returns a copy or a view.
Are we satisfied with the design decisions included here?
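To illustrate the NumPy-specific cross-product behavior the proposal leaves out, here is a short sketch using numpy.cross; the keyword names axisa/axisb are NumPy's own and are not part of the proposed spec.

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(np.cross(a, b))                      # 3-element vectors: [0. 0. 1.]

# NumPy also accepts 2-element vectors, returning only the z-component...
print(np.cross([1.0, 0.0], [0.0, 1.0]))    # 1.0

# ...and lets the vector axis of each operand be chosen independently.
A = np.random.rand(3, 5)
B = np.random.rand(5, 3)
print(np.cross(A, B, axisa=0, axisb=1).shape)   # (5, 3)

# Torch and TensorFlow support neither, which is why the proposal omits them.
```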