Implement _fft_* ops | feat(torchlib) #926

Merged
merged 33 commits into from
Oct 26, 2023
Conversation

justinchuby
Collaborator

@justinchuby justinchuby commented Jul 27, 2023

The change implements _fft_c2c, _fft_c2r, and _fft_r2c. I extracted the common logic into _fftn_onnx, with the hope that we will be able to express it as a function once DFT supports dynamic axes: onnx/onnx#5447
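As context for the integer `normalization` argument these aten ops take (seen in the test kwargs further down), here is a minimal NumPy sketch of how the three modes scale a forward transform. The function name `forward_fft_with_aten_norm` and the enum reading (0 = none, 1 = ortho, 2 = scale by 1/n) are assumptions based on PyTorch's usual convention, not code from this PR:

```python
import numpy as np

def forward_fft_with_aten_norm(x: np.ndarray, normalization: int) -> np.ndarray:
    """Forward 1-D DFT scaled per the (assumed) aten normalization enum.

    0 = no scaling, 1 = scale by 1/sqrt(n) ("ortho"), 2 = scale by 1/n ("forward").
    """
    n = x.shape[-1]
    result = np.fft.fft(x)  # unscaled DFT
    if normalization == 1:
        result = result / np.sqrt(n)
    elif normalization == 2:
        result = result / n
    return result

x = np.exp(2j * np.pi * np.arange(8) / 8)
# normalization=2 should match NumPy's norm="forward"
assert np.allclose(forward_fft_with_aten_norm(x, 2), np.fft.fft(x, norm="forward"))
```

Under this reading, the implementation must apply the scale factor itself, since ONNX DFT produces the unscaled transform.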

@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Jul 27, 2023
@justinchuby justinchuby requested a review from BowenBao July 27, 2023 04:00
@codecov

codecov bot commented Jul 27, 2023

Codecov Report

Merging #926 (83d644a) into main (4d7ac4d) will decrease coverage by 0.03%.
The diff coverage is 75.00%.

❗ Current head 83d644a differs from the pull request's most recent head 0bd5688. Consider uploading reports for commit 0bd5688 to get more accurate results.

@@            Coverage Diff             @@
##             main     #926      +/-   ##
==========================================
- Coverage   78.44%   78.41%   -0.03%     
==========================================
  Files         118      118              
  Lines       14954    15014      +60     
  Branches     1586     1597      +11     
==========================================
+ Hits        11730    11773      +43     
- Misses       2859     2876      +17     
  Partials      365      365              
Files Coverage Δ
onnxscript/function_libs/torch_lib/registration.py 83.05% <ø> (ø)
...ests/function_libs/torch_lib/error_reproduction.py 100.00% <100.00%> (ø)
...ipt/tests/function_libs/torch_lib/ops_test_data.py 96.15% <100.00%> (+0.01%) ⬆️
...ript/tests/function_libs/torch_lib/extra_opinfo.py 97.38% <60.00%> (-1.39%) ⬇️
onnxscript/function_libs/torch_lib/ops/fft.py 65.93% <79.06%> (+11.76%) ⬆️

... and 4 files with indirect coverage changes

@github-actions

github-actions bot commented Jul 28, 2023

Test Results

18 files ±0   18 suites ±0   1h 36m 55s ⏱️ +14m 55s
11,078 tests -1,178   8,319 ✔️ -925   2,722 💤 -253   37 ±0
158,784 runs +36   36,470 ✔️ +36   120,460 💤 ±0   1,854 ±0

For more details on these failures, see this check.

Results for commit 0bd5688. ± Comparison against base commit 4d7ac4d.

This pull request removes 2,637 tests and adds 1,459 tests. Note that renamed tests count towards both.
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0000_test_reduce_sum_square_negative_axes_keepdims_example_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0001_test_reduce_l2_do_not_keepdims_example
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0002_test_cast_FLOAT_to_BFLOAT16
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0003_test_bitwise_not_2d
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0004_test_asin_example
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0005_test_optional_has_element_tensor_input
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0006_test_layer_normalization_3d_axis_negative_1_epsilon_expanded_ver18
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0007_test_hardsigmoid_example_expanded_ver18
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0008_test_sequence_map_add_1_sequence_1_tensor
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0009_test_gru_batchwise
…
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0014_test_hardmax_one_hot
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0015_test_maxunpool_export_without_output_shape
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0016_test_max_example
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0017_test_layer_normalization_4d_axis_negative_2_expanded_ver18
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0018_test_sequence_map_extract_shapes_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0019_test_hardmax_default_axis
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0020_test_sub_bcast
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0022_test_concat_2d_axis_1
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0023_test_reduce_prod_keepdims_example
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0024_test_min_float16
…
This pull request removes 441 skipped tests and adds 189 skipped tests. Note that renamed tests count towards both.
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0002_test_cast_FLOAT_to_BFLOAT16
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0005_test_optional_has_element_tensor_input
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0008_test_sequence_map_add_1_sequence_1_tensor
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0009_test_gru_batchwise
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0010_test_cast_FLOAT16_to_FLOAT8E5M2FNUZ
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0020_test_split_variable_parts_2d_opset13
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0036_test_pow_types_float32_uint64
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0045_test_scan9_sum
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0047_test_cast_FLOAT8E4M3FN_to_FLOAT16
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0050_test_cast_FLOAT_to_FLOAT8E5M2FNUZ
…
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0018_test_sequence_map_extract_shapes_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0033_test_castlike_FLOAT_to_FLOAT8E4M3FN_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0044_test_cast_FLOAT16_to_FLOAT8E4M3FNUZ
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0049_test_castlike_FLOAT_to_FLOAT8E5M2FNUZ_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0054_test_center_crop_pad_crop_negative_axes_hwc_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0055_test_castlike_BFLOAT16_to_FLOAT_expanded
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0057_test_max_uint16
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0063_test_min_uint8
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0066_test_batchnorm_epsilon
onnxscript.backend.onnx_export_test.TestOnnxBackEnd ‑ test_export2python_produces_correct_onnx_script_model_0068_test_maxunpool_export_with_output_shape
…

♻️ This comment has been updated with latest results.

@justinchuby justinchuby mentioned this pull request Jul 31, 2023
@justinchuby
Collaborator Author

fft c2c shapes incorrect:
actual: [Figure_1 — plot of the actual output]
expected: [Figure_2 — plot of the expected output]

@justinchuby
Collaborator Author

After accounting for the batch dim

Summary

The output of ONNX Runtime does not match that of PyTorch when executing test
ops_test.TestOutputConsistencyEagerCPU.test_complex_output_match_opinfo__ops_aten__fft_c2c_cpu_complex64, sample 22 in ONNX Script TorchLib.

To recreate this report, use

CREATE_REPRODUCTION_REPORT=1 python -m pytest onnxscript/tests/function_libs/torch_lib/ops_test.py -k test_complex_output_match_opinfo__ops_aten__fft_c2c_cpu_complex64

Inputs

Shapes: ['Tensor<torch.Size([31]), dtype=torch.complex64>']

Details

kwargs = {'dim': (0,), 'normalization': 2, 'forward': True}
inputs = (tensor([-5.4834-8.1405j, -6.4395-5.7827j,  6.8528+5.5010j, -6.1100-7.1024j,
         8.0598+2.1363j,  4.1445+1.4055j, -3.3628-4.8623j,  4.8422-8.8907j,
        -7.4891+2.7178j, -2.3602+1.7215j,  4.8001-5.1720j, -5.0662-5.5355j,
        -7.8746-0.6092j,  7.6425-0.5501j,  4.9359-4.4759j,  0.8930-6.7581j,
        -0.5949+1.9750j, -3.8508+8.2630j,  2.5410+4.5017j,  8.9570+3.8151j,
        -0.7071+4.2984j,  8.8689-8.6646j, -1.6968+4.3704j, -2.1814-7.3222j,
        -5.2246-4.6778j,  3.7313+5.7299j, -1.0242-7.3642j,  1.2788+2.5162j,
        -3.3949+3.1033j, -3.6764+3.0298j,  6.4394+4.4885j]),)

Expected output

Shape: torch.Size([31, 2])

Details

expected = tensor([[ 2.4033e-01, -8.4950e-01],
        [-1.1390e+00,  1.3087e-02],
        [ 5.4357e-01, -2.0096e-01],
        [-1.5274e+00, -2.3665e-01],
        [-3.4732e-01, -9.2915e-02],
        [-5.1547e-01, -7.6201e-01],
        [ 2.9447e-02,  2.0503e-01],
        [-1.5932e-01,  1.8731e-01],
        [ 3.8123e-02,  1.1308e-01],
        [ 1.6187e-01, -8.5115e-01],
        [-3.2415e-01, -5.8877e-01],
        [-3.1471e+00,  1.6404e+00],
        [-9.9449e-01, -2.7970e-01],
        [ 3.3168e-01,  5.1865e-01],
        [-2.6733e-01,  4.8409e-01],
        [ 5.7879e-02, -3.0706e-01],
        [ 1.0201e+00,  1.1506e+00],
        [-8.4969e-03,  5.6365e-01],
        [-2.7631e-01, -1.7776e+00],
        [ 8.7430e-01, -2.1237e+00],
        [-2.8694e-01, -9.6380e-01],
        [ 5.2944e-05,  1.7100e-01],
        [ 1.3434e-01, -1.3284e+00],
        [-1.9910e+00, -1.2798e+00],
        [ 9.8181e-01,  5.2822e-01],
        [ 7.8974e-01, -1.9880e+00],
        [ 6.5259e-01,  3.6287e-01],
        [-1.2498e+00, -1.3931e+00],
        [ 9.2038e-01, -9.0553e-02],
        [-2.4528e-01,  1.4182e+00],
        [ 2.1982e-01, -3.8317e-01]])

Actual output

Shape: torch.Size([31, 2])

Details

actual = tensor([[ 7.4503e+00, -2.6335e+01],
        [-3.5309e+01,  4.0577e-01],
        [ 1.6851e+01, -6.2297e+00],
        [-4.7350e+01, -7.3362e+00],
        [-1.0767e+01, -2.8803e+00],
        [-1.5980e+01, -2.3622e+01],
        [ 9.1279e-01,  6.3560e+00],
        [-4.9388e+00,  5.8067e+00],
        [ 1.1818e+00,  3.5054e+00],
        [ 5.0180e+00, -2.6386e+01],
        [-1.0049e+01, -1.8252e+01],
        [-9.7559e+01,  5.0852e+01],
        [-3.0829e+01, -8.6707e+00],
        [ 1.0282e+01,  1.6078e+01],
        [-8.2873e+00,  1.5007e+01],
        [ 1.7942e+00, -9.5190e+00],
        [ 3.1625e+01,  3.5669e+01],
        [-2.6325e-01,  1.7473e+01],
        [-8.5656e+00, -5.5105e+01],
        [ 2.7103e+01, -6.5834e+01],
        [-8.8952e+00, -2.9878e+01],
        [ 1.7469e-03,  5.3010e+00],
        [ 4.1644e+00, -4.1179e+01],
        [-6.1722e+01, -3.9673e+01],
        [ 3.0436e+01,  1.6375e+01],
        [ 2.4481e+01, -6.1627e+01],
        [ 2.0230e+01,  1.1249e+01],
        [-3.8743e+01, -4.3186e+01],
        [ 2.8532e+01, -2.8073e+00],
        [-7.6042e+00,  4.3966e+01],
        [ 6.8143e+00, -1.1878e+01]])

Difference

Details

--- actual
+++ expected
@@ -1,31 +1,31 @@
-tensor([[ 7.4503e+00, -2.6335e+01],
-        [-3.5309e+01,  4.0577e-01],
-        [ 1.6851e+01, -6.2297e+00],
-        [-4.7350e+01, -7.3362e+00],
-        [-1.0767e+01, -2.8803e+00],
-        [-1.5980e+01, -2.3622e+01],
-        [ 9.1279e-01,  6.3560e+00],
-        [-4.9388e+00,  5.8067e+00],
-        [ 1.1818e+00,  3.5054e+00],
-        [ 5.0180e+00, -2.6386e+01],
-        [-1.0049e+01, -1.8252e+01],
-        [-9.7559e+01,  5.0852e+01],
-        [-3.0829e+01, -8.6707e+00],
-        [ 1.0282e+01,  1.6078e+01],
-        [-8.2873e+00,  1.5007e+01],
-        [ 1.7942e+00, -9.5190e+00],
-        [ 3.1625e+01,  3.5669e+01],
-        [-2.6325e-01,  1.7473e+01],
-        [-8.5656e+00, -5.5105e+01],
-        [ 2.7103e+01, -6.5834e+01],
-        [-8.8952e+00, -2.9878e+01],
-        [ 1.7469e-03,  5.3010e+00],
-        [ 4.1644e+00, -4.1179e+01],
-        [-6.1722e+01, -3.9673e+01],
-        [ 3.0436e+01,  1.6375e+01],
-        [ 2.4481e+01, -6.1627e+01],
-        [ 2.0230e+01,  1.1249e+01],
-        [-3.8743e+01, -4.3186e+01],
-        [ 2.8532e+01, -2.8073e+00],
-        [-7.6042e+00,  4.3966e+01],
-        [ 6.8143e+00, -1.1878e+01]])
+tensor([[ 2.4033e-01, -8.4950e-01],
+        [-1.1390e+00,  1.3087e-02],
+        [ 5.4357e-01, -2.0096e-01],
+        [-1.5274e+00, -2.3665e-01],
+        [-3.4732e-01, -9.2915e-02],
+        [-5.1547e-01, -7.6201e-01],
+        [ 2.9447e-02,  2.0503e-01],
+        [-1.5932e-01,  1.8731e-01],
+        [ 3.8123e-02,  1.1308e-01],
+        [ 1.6187e-01, -8.5115e-01],
+        [-3.2415e-01, -5.8877e-01],
+        [-3.1471e+00,  1.6404e+00],
+        [-9.9449e-01, -2.7970e-01],
+        [ 3.3168e-01,  5.1865e-01],
+        [-2.6733e-01,  4.8409e-01],
+        [ 5.7879e-02, -3.0706e-01],
+        [ 1.0201e+00,  1.1506e+00],
+        [-8.4969e-03,  5.6365e-01],
+        [-2.7631e-01, -1.7776e+00],
+        [ 8.7430e-01, -2.1237e+00],
+        [-2.8694e-01, -9.6380e-01],
+        [ 5.2944e-05,  1.7100e-01],
+        [ 1.3434e-01, -1.3284e+00],
+        [-1.9910e+00, -1.2798e+00],
+        [ 9.8181e-01,  5.2822e-01],
+        [ 7.8974e-01, -1.9880e+00],
+        [ 6.5259e-01,  3.6287e-01],
+        [-1.2498e+00, -1.3931e+00],
+        [ 9.2038e-01, -9.0553e-02],
+        [-2.4528e-01,  1.4182e+00],
+        [ 2.1982e-01, -3.8317e-01]])

Full error stack

Tensor-likes are not close!

Mismatched elements: 62 / 62 (100.0%)
Greatest absolute difference: 94.4123764038086 at index (11, 0) (up to 1e-05 allowed)
Greatest relative difference: 31.994916915893555 at index (21, 0) (up to 1.3e-06 allowed)
  File "/home/justinchu/dev/onnx-script/onnxscript/tests/function_libs/torch_lib/ops_test.py", line 259, in run_test_output_match
    torch.testing.assert_close(
  File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1520, in assert_close
    raise error_metas[0].to_error(msg)
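The magnitudes in the report above look consistent with a missing normalization step: with n = 31 and `normalization=2` (scale by 1/n on the forward transform, under the usual aten convention, which is an assumption here), the unscaled DFT output is exactly n times the normalized one, matching the roughly 31× gap between actual and expected. A quick NumPy check of that hypothesis:

```python
import numpy as np

n = 31
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

unnormalized = np.fft.fft(x)                # what the reported "actual" resembles
normalized = np.fft.fft(x, norm="forward")  # expected output for normalization=2

# The two differ by exactly the factor n, matching the ~31x mismatch above.
assert np.allclose(unnormalized, normalized * n)
```

This suggests the fix is to divide the DFT result by n (or 1/sqrt(n) for the ortho mode) before returning, rather than a structural error in the transform itself.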

@justinchuby justinchuby added the help wanted Extra attention is needed label Aug 10, 2023
@justinchuby justinchuby removed the help wanted Extra attention is needed label Oct 26, 2023
@justinchuby justinchuby marked this pull request as ready for review October 26, 2023 16:27
@justinchuby
Collaborator Author

My bad

@justinchuby justinchuby merged commit 70843ef into main Oct 26, 2023
@justinchuby justinchuby deleted the justinchu/fft branch October 26, 2023 17:05