Implement _fft_* ops | feat(torchlib) #926
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #926      +/-   ##
==========================================
- Coverage   78.44%   78.41%   -0.03%
==========================================
  Files         118      118
  Lines       14954    15014      +60
  Branches     1586     1597      +11
==========================================
+ Hits        11730    11773      +43
- Misses       2859     2876      +17
  Partials      365      365
Test Results
18 files ±0   18 suites ±0   1h 36m 55s ⏱️ +14m 55s
For more details on these failures, see this check.
Results for commit 0bd5688. ± Comparison against base commit 4d7ac4d.
This pull request removes 2637 and adds 1459 tests. Note that renamed tests count towards both.
This pull request removes 441 skipped tests and adds 189 skipped tests. Note that renamed tests count towards both.
♻️ This comment has been updated with latest results.
After accounting for the batch dim

Summary
The output of ONNX Runtime does not match that of PyTorch when executing test test_complex_output_match_opinfo__ops_aten__fft_c2c_cpu_complex64. To recreate this report, use

CREATE_REPRODUCTION_REPORT=1 python -m pytest onnxscript/tests/function_libs/torch_lib/ops_test.py -k test_complex_output_match_opinfo__ops_aten__fft_c2c_cpu_complex64

Inputs
Shapes:
kwargs = {'dim': (0,), 'normalization': 2, 'forward': True}
inputs = (tensor([-5.4834-8.1405j, -6.4395-5.7827j, 6.8528+5.5010j, -6.1100-7.1024j,
8.0598+2.1363j, 4.1445+1.4055j, -3.3628-4.8623j, 4.8422-8.8907j,
-7.4891+2.7178j, -2.3602+1.7215j, 4.8001-5.1720j, -5.0662-5.5355j,
-7.8746-0.6092j, 7.6425-0.5501j, 4.9359-4.4759j, 0.8930-6.7581j,
-0.5949+1.9750j, -3.8508+8.2630j, 2.5410+4.5017j, 8.9570+3.8151j,
-0.7071+4.2984j, 8.8689-8.6646j, -1.6968+4.3704j, -2.1814-7.3222j,
-5.2246-4.6778j, 3.7313+5.7299j, -1.0242-7.3642j, 1.2788+2.5162j,
-3.3949+3.1033j, -3.6764+3.0298j, 6.4394+4.4885j]),)

Expected output
Shape:
expected = tensor([[ 2.4033e-01, -8.4950e-01],
[-1.1390e+00, 1.3087e-02],
[ 5.4357e-01, -2.0096e-01],
[-1.5274e+00, -2.3665e-01],
[-3.4732e-01, -9.2915e-02],
[-5.1547e-01, -7.6201e-01],
[ 2.9447e-02, 2.0503e-01],
[-1.5932e-01, 1.8731e-01],
[ 3.8123e-02, 1.1308e-01],
[ 1.6187e-01, -8.5115e-01],
[-3.2415e-01, -5.8877e-01],
[-3.1471e+00, 1.6404e+00],
[-9.9449e-01, -2.7970e-01],
[ 3.3168e-01, 5.1865e-01],
[-2.6733e-01, 4.8409e-01],
[ 5.7879e-02, -3.0706e-01],
[ 1.0201e+00, 1.1506e+00],
[-8.4969e-03, 5.6365e-01],
[-2.7631e-01, -1.7776e+00],
[ 8.7430e-01, -2.1237e+00],
[-2.8694e-01, -9.6380e-01],
[ 5.2944e-05, 1.7100e-01],
[ 1.3434e-01, -1.3284e+00],
[-1.9910e+00, -1.2798e+00],
[ 9.8181e-01, 5.2822e-01],
[ 7.8974e-01, -1.9880e+00],
[ 6.5259e-01, 3.6287e-01],
[-1.2498e+00, -1.3931e+00],
[ 9.2038e-01, -9.0553e-02],
[-2.4528e-01, 1.4182e+00],
[ 2.1982e-01, -3.8317e-01]])

Actual output
Shape:
actual = tensor([[ 7.4503e+00, -2.6335e+01],
[-3.5309e+01, 4.0577e-01],
[ 1.6851e+01, -6.2297e+00],
[-4.7350e+01, -7.3362e+00],
[-1.0767e+01, -2.8803e+00],
[-1.5980e+01, -2.3622e+01],
[ 9.1279e-01, 6.3560e+00],
[-4.9388e+00, 5.8067e+00],
[ 1.1818e+00, 3.5054e+00],
[ 5.0180e+00, -2.6386e+01],
[-1.0049e+01, -1.8252e+01],
[-9.7559e+01, 5.0852e+01],
[-3.0829e+01, -8.6707e+00],
[ 1.0282e+01, 1.6078e+01],
[-8.2873e+00, 1.5007e+01],
[ 1.7942e+00, -9.5190e+00],
[ 3.1625e+01, 3.5669e+01],
[-2.6325e-01, 1.7473e+01],
[-8.5656e+00, -5.5105e+01],
[ 2.7103e+01, -6.5834e+01],
[-8.8952e+00, -2.9878e+01],
[ 1.7469e-03, 5.3010e+00],
[ 4.1644e+00, -4.1179e+01],
[-6.1722e+01, -3.9673e+01],
[ 3.0436e+01, 1.6375e+01],
[ 2.4481e+01, -6.1627e+01],
[ 2.0230e+01, 1.1249e+01],
[-3.8743e+01, -4.3186e+01],
[ 2.8532e+01, -2.8073e+00],
[-7.6042e+00, 4.3966e+01],
[ 6.8143e+00, -1.1878e+01]])

Difference
--- actual
+++ expected
@@ -1,31 +1,31 @@
-tensor([[ 7.4503e+00, -2.6335e+01],
- [-3.5309e+01, 4.0577e-01],
- [ 1.6851e+01, -6.2297e+00],
- [-4.7350e+01, -7.3362e+00],
- [-1.0767e+01, -2.8803e+00],
- [-1.5980e+01, -2.3622e+01],
- [ 9.1279e-01, 6.3560e+00],
- [-4.9388e+00, 5.8067e+00],
- [ 1.1818e+00, 3.5054e+00],
- [ 5.0180e+00, -2.6386e+01],
- [-1.0049e+01, -1.8252e+01],
- [-9.7559e+01, 5.0852e+01],
- [-3.0829e+01, -8.6707e+00],
- [ 1.0282e+01, 1.6078e+01],
- [-8.2873e+00, 1.5007e+01],
- [ 1.7942e+00, -9.5190e+00],
- [ 3.1625e+01, 3.5669e+01],
- [-2.6325e-01, 1.7473e+01],
- [-8.5656e+00, -5.5105e+01],
- [ 2.7103e+01, -6.5834e+01],
- [-8.8952e+00, -2.9878e+01],
- [ 1.7469e-03, 5.3010e+00],
- [ 4.1644e+00, -4.1179e+01],
- [-6.1722e+01, -3.9673e+01],
- [ 3.0436e+01, 1.6375e+01],
- [ 2.4481e+01, -6.1627e+01],
- [ 2.0230e+01, 1.1249e+01],
- [-3.8743e+01, -4.3186e+01],
- [ 2.8532e+01, -2.8073e+00],
- [-7.6042e+00, 4.3966e+01],
- [ 6.8143e+00, -1.1878e+01]])
+tensor([[ 2.4033e-01, -8.4950e-01],
+ [-1.1390e+00, 1.3087e-02],
+ [ 5.4357e-01, -2.0096e-01],
+ [-1.5274e+00, -2.3665e-01],
+ [-3.4732e-01, -9.2915e-02],
+ [-5.1547e-01, -7.6201e-01],
+ [ 2.9447e-02, 2.0503e-01],
+ [-1.5932e-01, 1.8731e-01],
+ [ 3.8123e-02, 1.1308e-01],
+ [ 1.6187e-01, -8.5115e-01],
+ [-3.2415e-01, -5.8877e-01],
+ [-3.1471e+00, 1.6404e+00],
+ [-9.9449e-01, -2.7970e-01],
+ [ 3.3168e-01, 5.1865e-01],
+ [-2.6733e-01, 4.8409e-01],
+ [ 5.7879e-02, -3.0706e-01],
+ [ 1.0201e+00, 1.1506e+00],
+ [-8.4969e-03, 5.6365e-01],
+ [-2.7631e-01, -1.7776e+00],
+ [ 8.7430e-01, -2.1237e+00],
+ [-2.8694e-01, -9.6380e-01],
+ [ 5.2944e-05, 1.7100e-01],
+ [ 1.3434e-01, -1.3284e+00],
+ [-1.9910e+00, -1.2798e+00],
+ [ 9.8181e-01, 5.2822e-01],
+ [ 7.8974e-01, -1.9880e+00],
+ [ 6.5259e-01, 3.6287e-01],
+ [-1.2498e+00, -1.3931e+00],
+ [ 9.2038e-01, -9.0553e-02],
+ [-2.4528e-01, 1.4182e+00],
+ [ 2.1982e-01, -3.8317e-01]])

Full error stack
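For context when reading the report above: the actual values are roughly 31 times the expected ones, i.e. off by the transform length, which points at a missing normalization step rather than a wrong transform. The snippet below is only a sanity check, not code from this PR, and it assumes that aten's integer `normalization` argument maps onto torch.fft's norm strings as 0 -> "backward", 1 -> "ortho", 2 -> "forward".

```python
# Sanity check (assumption: normalization=2 with forward=True means the forward
# DFT is scaled by 1/n, i.e. torch.fft norm="forward").
import torch

x = torch.randn(31, dtype=torch.complex64)

normalized = torch.fft.fft(x, dim=0, norm="forward")     # scaled by 1/n
unnormalized = torch.fft.fft(x, dim=0, norm="backward")  # no scaling

# The two differ by exactly the transform length, matching the ~31x gap
# between "actual" and "expected" in the report above.
torch.testing.assert_close(unnormalized / x.shape[0], normalized)
```

If that mapping holds, the mismatch in the report corresponds to the missing 1/n factor alone.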
My bad
Force-pushed from 83d644a to 0bd5688
The change implements _fft_c2c, _fft_c2r and _fft_r2c. I extracted the common logic to _fftn_onnx, with the hope that we will be able to express this as a function when DFT supports dynamic axes: onnx/onnx#5447.
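For readers who have not hit the constraint being referenced: ONNX DFT transforms a single axis per node, so an n-dimensional FFT currently has to be unrolled into a chain of one-axis DFTs while the graph is built. The sketch below is not the PR's implementation of _fftn_onnx, only a NumPy reference of that decomposition under an assumed aten normalization convention (0: none, 1: 1/sqrt(n), 2: 1/n).

```python
# NumPy reference (a sketch, not the PR's code) of the chained single-axis
# decomposition that an _fftn_onnx-style helper lowers to.
import numpy as np


def _fftn_reference(x, dims, normalization, forward):
    result = x.astype(np.complex64)
    # One transform per axis. This per-axis unrolling is what dynamic axes in
    # ONNX DFT (onnx/onnx#5447) would allow a single reusable function to replace.
    for dim in dims:
        if forward:
            result = np.fft.fft(result, axis=dim)
        else:
            # np.fft.ifft divides by n; undo it so scaling is applied once below.
            result = np.fft.ifft(result, axis=dim) * x.shape[dim]
    # Assumed aten convention: 0 -> no scaling, 1 -> 1/sqrt(n), 2 -> 1/n.
    total = np.prod([x.shape[d] for d in dims])
    if normalization == 1:
        result = result / np.sqrt(total)
    elif normalization == 2:
        result = result / total
    return result


# The failing configuration from the report: 1-D transform, normalization=2, forward.
x = np.random.randn(31).astype(np.complex64)
np.testing.assert_allclose(
    _fftn_reference(x, dims=(0,), normalization=2, forward=True),
    np.fft.fft(x, axis=0, norm="forward"),
    rtol=1e-4,
    atol=1e-5,
)
```

Because the loop over dims is unrolled at graph-build time, a DFT with dynamic axes would let the whole helper collapse into a single ONNX function, which is the motivation for tracking onnx/onnx#5447.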