[torchlib] Fix various implementations #2050
Conversation
❌ CI: 60 tests failed, 1 flaky test.
```diff
 def aten_clamp_min(self: TReal, min_: TReal) -> TReal:
     """clamp_min(Tensor self, Tensor min) -> Tensor"""

     # This implementation does not intend to handle when self is an empty tensor
-    min_rank = Rank(min_)
+    min_rank = len(min_.shape)
```
This breaks exporting scripts that use clamp(min=1e-12) (or any other Python float) instead of a tensor, since floats don't have a .shape attribute.
I expect the same issue occurs in clamp_max, but I haven't tried to reproduce it.
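For context, a minimal sketch of the kind of script that hits this (the module and export call are illustrative; the exact export entry point varies by PyTorch version):

```python
import torch


class Normalize(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # `min` is a Python float here, not a tensor
        return x / x.norm(dim=-1, keepdim=True).clamp(min=1e-12)


# Exporting reaches aten_clamp_min with min_ = 1e-12 (a float);
# len(min_.shape) then raises AttributeError, since floats have no .shape.
torch.onnx.export(Normalize(), (torch.randn(2, 4),), "normalize.onnx", dynamo=True)
```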
Thanks for reporting. I will create a patch today.
Fix issues reported in #2050 (comment)
Fix implementations according to pytorch/pytorch#146224. Removed eager-mode tests, since we only care about the constructed graph. Changed all traceable ops to trace_only. Removed usage of the IsScalar function.
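A minimal sketch of the kind of guard such a patch might add, assuming `min_` can arrive as either a Python scalar or a tensor-like value with `.shape`; the import paths and the Clip/Max dispatch are assumptions modeled on the diff above, not the final patch:

```python
# Import paths as used in torchlib; verify against the repo.
from onnxscript.function_libs.torch_lib.tensor_typing import TReal
from onnxscript.onnx_opset import opset18 as op


def aten_clamp_min(self: TReal, min_: TReal) -> TReal:
    """clamp_min(Tensor self, Tensor min) -> Tensor"""
    # Python scalars (e.g. clamp(min=1e-12)) have no .shape, so treat
    # them as rank 0 instead of calling len(min_.shape) unconditionally.
    if isinstance(min_, (int, float)):
        min_rank = 0
    else:
        min_rank = len(min_.shape)
    if min_rank == 0:
        # Scalar min: Clip accepts a scalar bound directly.
        return op.Clip(self, min_, None)
    # Tensor min: elementwise maximum against the bound.
    return op.Max(self, min_)
```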