[API Compatibility] add compat.softmax to be compatible with torch.softmax #74874
Conversation
Your PR has been submitted successfully. Thank you for your contribution to this open-source project!
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             develop     #74874   +/- ##
    ==========================================
      Coverage          ?      88.67%
    ==========================================
      Files             ?           7
      Lines             ?          53
      Branches          ?           0
    ==========================================
      Hits              ?          47
      Misses            ?           6
      Partials          ?           0

☔ View full report in Codecov by Sentry.
Why is this split into so many softmax functions here?
There should only be two softmax functions; compat.softmax should just set dim and call the original softmax directly.
Four were written in total; their types are different.
python/paddle/tensor/softmax.py (Outdated)

    @softmax_param_ignore_alias
    def compat_softmax(
Put this implementation where it belongs, in tensor/compat.py. paddle.compat.softmax can just call paddle.softmax directly.
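A minimal sketch of the arrangement the reviewer is asking for (names and the default-dim choice are illustrative assumptions, not the PR's final code): the compat wrapper lives in tensor/compat.py and simply maps the torch-style dim keyword onto paddle.softmax's axis.

```python
# Hypothetical thin wrapper in python/paddle/tensor/compat.py.
from __future__ import annotations

import paddle
from paddle import Tensor


def softmax(
    input: Tensor,
    dim: int | None = None,
    dtype: str | None = None,
) -> Tensor:
    # Map torch's ``dim`` onto Paddle's ``axis`` and delegate. Defaulting
    # to the last axis is a simplifying assumption, not torch's
    # implicit-dim heuristic.
    axis = -1 if dim is None else dim
    return paddle.nn.functional.softmax(input, axis=axis, dtype=dtype)
```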
Adjusted.
python/paddle/tensor/softmax.py (Outdated)

    @@ -159,7 +164,156 @@ def softmax(
                [0.03205860, 0.08714432, 0.23688282, 0.64391426],
                [0.03205860, 0.08714432, 0.23688282, 0.64391426]]])
        """
        return _softmax_impl(x, axis, dtype, name, out=out)
This can just be expanded inline.
Expanded.
python/paddle/tensor/softmax.py (Outdated)

    @softmax_param_ignore_alias
    def compat_softmax(
        x: Tensor,
For the signatures under compat, follow the input/dim style, keeping them consistent with min/max/sort.
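For illustration, the torch-aligned signature the reviewer describes would look roughly like this (a stub, not the merged code):

```python
from __future__ import annotations

from paddle import Tensor


def softmax(
    input: Tensor,              # torch-style name, matching compat.min/max/sort
    dim: int | None = None,     # torch-style spelling of Paddle's ``axis``
    dtype: str | None = None,
    *,
    out: Tensor | None = None,  # keyword-only, as in torch
) -> Tensor: ...
```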
Adjusted.
python/paddle/compat.py (Outdated)

    @@ -19,5 +19,6 @@
        sort,
        split,
    )
    from .tensor.softmax import compat_softmax as softmax
Implement it under .tensor.compat.
python/paddle/tensor/softmax.py (Outdated)

        return _softmax_impl(x, axis, dtype, name, out=out)


    @softmax_param_ignore_alias
Reused.
python/paddle/tensor/__init__.py (Outdated)

    @@ -494,6 +494,8 @@
    )
    from .to_string import set_printoptions  # noqa: F401

    __all__ = ['softmax']
This can't go in __all__. It has to be added to tensor_method_func below; that is what binds it onto paddle.Tensor.
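Illustratively, the registration the reviewer means looks something like the following near the bottom of python/paddle/tensor/__init__.py (entries abridged; only the mechanism is the point):

```python
# ``__all__`` only controls star-imports; functions listed in
# ``tensor_method_func`` are the ones patched onto paddle.Tensor
# as methods.
tensor_method_func = [
    # ... existing entries ...
    'softmax',
]
```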
python/paddle/tensor/compat.py (Outdated)

    if in_dynamic_or_pir_mode():
        outs_cast = input if dtype is None else _C_ops.cast(input, dtype)
        return paddle.assign(_C_ops.softmax(outs_cast, dim), out)
    else:
The old static-graph branch below doesn't need to be implemented.
python/paddle/tensor/compat.py (Outdated)

        Size2,
    )


    from paddle import nn
    from paddle.base.data_feeder import check_dtype, check_variable_and_dtype
    from paddle.base.framework import convert_np_dtype_to_dtype_
    from paddle.base.layer_helper import LayerHelper
The old static-graph branch doesn't need to be implemented, so remove the imports that are no longer needed.
python/paddle/tensor/compat.py (Outdated)

    def softmax(
        input: Tensor,
        dim: int | None = None,
        _stacklevel: int = 3,
Can this _stacklevel parameter be aligned with torch.softmax? Could you subclass ForbidKeywordsDecorator and handle _stacklevel internally, so the outer signature stays input, dim, dtype, *, out?
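A sketch of that suggestion: the decorator absorbs the legacy _stacklevel keyword so it never surfaces in the public signature. The constructor keyword ignored_param is an assumed spelling for illustration.

```python
from __future__ import annotations

from paddle import Tensor


# Hypothetical: the PR's decorator strips ``_stacklevel`` before the call,
# keeping the visible signature aligned with
# torch.softmax(input, dim, dtype, *, out).
@ForbidKeywordsIgnoreOneParamDecorator(ignored_param="_stacklevel")
def softmax(
    input: Tensor,
    dim: int | None = None,
    dtype: str | None = None,
    *,
    out: Tensor | None = None,
) -> Tensor: ...
```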
python/paddle/tensor/compat.py (Outdated)

    dtype = convert_np_dtype_to_dtype_(dtype)
    if in_dynamic_or_pir_mode():
        outs_cast = input if dtype is None else _C_ops.cast(input, dtype)
        return paddle.assign(_C_ops.softmax(outs_cast, dim), out)
Just use _C_ops.softmax(outs_cast, dim, out=out) here.
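Combining this with the earlier point about dropping the old static-graph branch, the body would reduce to roughly the following (a sketch; it assumes _C_ops.softmax accepts an out keyword, as the reviewer indicates):

```python
from paddle import _C_ops
from paddle.base.framework import (
    convert_np_dtype_to_dtype_,
    in_dynamic_or_pir_mode,
)


def softmax(input, dim=-1, dtype=None, *, out=None):
    if dtype is not None:
        dtype = convert_np_dtype_to_dtype_(dtype)
    # Dynamic/PIR mode only -- no legacy static-graph branch.
    if in_dynamic_or_pir_mode():
        outs_cast = input if dtype is None else _C_ops.cast(input, dtype)
        # Forward ``out`` straight into the C++ op instead of paddle.assign.
        return _C_ops.softmax(outs_cast, dim, out=out)
```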
    @@ -121,6 +121,7 @@ def softmax(
            :math:`axis + D` . Default is -1.
        dtype (str, optional): The data type of the output tensor, can be bfloat16, float16, float32, float64.
        name (str|None, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.
        out (Tensor, optional): The output Tensor.
Can this be implemented with _C_ops.softmax(outs_cast, axis, out=out)?
    @@ -31,6 +31,7 @@
        real,
        shape,
    )
    from .compat_softmax import softmax as softmax
This has to be added to tensor_method_func so it gets bound onto paddle.Tensor.
It has been in tensor_method_func all along.
    @@ -403,6 +396,70 @@ def process(
            return args, kwargs


    class ForbidKeywordsIgnoreOneParamDecorator(DecoratorBase):
This could inherit from ForbidKeywordsDecorator, pulling out only its ignore logic and delegating the rest with super().__init__(*args, **kwargs); that maximizes code reuse.
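A sketch of the refactor being suggested (the ignored_param spelling and the import path are assumptions):

```python
# Import path assumed; ForbidKeywordsDecorator is Paddle's existing
# compat-decorator base class.
from paddle.utils.decorator_utils import ForbidKeywordsDecorator


class ForbidKeywordsIgnoreOneParamDecorator(ForbidKeywordsDecorator):
    """Like ForbidKeywordsDecorator, but silently drops one extra keyword."""

    def __init__(self, *args, ignored_param: str, **kwargs):
        # Everything except the ignore logic is delegated to the parent,
        # per the reviewer's super().__init__(*args, **kwargs) suggestion.
        super().__init__(*args, **kwargs)
        self.ignored_param = ignored_param

    def process(self, args, kwargs):
        # Strip the ignored keyword (e.g. ``_stacklevel``) before the
        # parent runs its usual forbidden-keyword checks.
        kwargs.pop(self.ignored_param, None)
        return super().process(args, kwargs)
```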
Done.
/re-run all-failed
LGTM
LGTM
LGTM
/re-run coverage build
PR Category
User Experience
PR Types
New features
Description
Add compat.softmax to be compatible with torch.softmax.
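Illustrative usage once merged (a sketch; exact semantics follow the merged code):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])

# torch-style keyword, matching torch.softmax(input, dim)
y = paddle.compat.softmax(x, dim=-1)

# equivalent existing Paddle spelling
z = paddle.nn.functional.softmax(x, axis=-1)

assert paddle.allclose(y, z)
```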