
Remove legacy optimizer #2180


Merged: 31 commits merged into main on Apr 21, 2025

Conversation

@justinchuby (Collaborator) commented on Apr 10, 2025

  • Remove the legacy optimizer and support proto inputs with the IR-based optimizer.
  • Add a new inline=True option to optimize() to control whether functions are inlined during optimization (see the usage sketch below).
  • Implement identity folding for graph outputs.
  • Migrate the constant-folding tests to run on IR models.

Fixes #2185
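
A minimal usage sketch of the API described in the bullet points above (the exact defaults and the return behavior are assumptions, not confirmed by this PR):

    import onnx
    from onnxscript import optimizer

    model_proto = onnx.load("model.onnx")
    # optimize() now accepts a ModelProto directly; the new inline flag
    # controls whether function inlining runs during optimization.
    optimized = optimizer.optimize(model_proto, inline=True)
    onnx.save(optimized, "model_optimized.onnx")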

Copilot AI (Contributor) left a comment

Copilot reviewed 7 out of 7 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (2)

onnxscript/optimizer/__init__.py:41

  • The conversion logic for onnx.ModelProto in the optimize function assumes in-place modification via Clear() and CopyFrom(), which may lead to unintended side effects. Consider reviewing this approach to ensure that modifying the input model in place is safe in all use cases.
assert isinstance(model, onnx.ModelProto)

onnxscript/optimizer/__init__.py:62

  • Clearing and subsequently copying back to model_proto may affect external references to the input model. It is recommended to verify that this in-place update is acceptable or switch to a functional model conversion approach.
model_proto.Clear()
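
To make the concern concrete, here is a minimal sketch of the in-place round-trip pattern the two comments refer to (the ir.serde helper names are assumptions; Clear()/CopyFrom() are standard protobuf calls):

    import onnx
    from onnxscript import ir

    def optimize_in_place(model_proto: onnx.ModelProto) -> onnx.ModelProto:
        # Proto -> IR, optimize, IR -> proto.
        model = ir.serde.deserialize_model(model_proto)
        # ... run IR-based optimization passes on `model` ...
        new_proto = ir.serde.serialize_model(model)
        # In-place update: any external references to model_proto now see
        # the optimized graph, which is the side effect being flagged.
        model_proto.Clear()
        model_proto.CopyFrom(new_proto)
        return model_proto

A functional alternative would simply return new_proto and leave the caller's proto untouched.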

codecov bot commented on Apr 10, 2025

❌ 18 Tests Failed:

Tests completed  Failed  Passed  Skipped
          14647      18   14629     2389
Top 3 failed tests by shortest run time:
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0390_test_floor
Stack Traces | 0.003s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_floor'

The above exception was the direct cause of the following exception:
.nox\test_torch_nightly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_floor' (e=No module named 'tests.onnx_backend_test_code.test_floor') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_floor.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_floor.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import FLOAT
E   from onnxscript.onnx_opset import opset13
E   
E   @script()
E   def bck_test_floor(x: FLOAT[3,4,5]) -> (FLOAT[3,4,5]):
E       y = opset13.Floor(x)
E       return y
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0640_test_max_float64
Stack Traces | 0.003s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_max_float64'

The above exception was the direct cause of the following exception:
.nox\test_torch_nightly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_max_float64' (e=No module named 'tests.onnx_backend_test_code.test_max_float64') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_max_float64.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_max_float64.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import DOUBLE
E   from onnxscript.onnx_opset import opset13
E   
E   @script()
E   def bck_test_max_float64(data_0: DOUBLE[3], data_1: DOUBLE[3]) -> (DOUBLE[3]):
E       result = opset13.Max(data_0, data_1)
E       return result
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0661_test_min_int64
Stack Traces | 0.003s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_min_int64'

The above exception was the direct cause of the following exception:
.nox\test_torch_nightly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_min_int64' (e=No module named 'tests.onnx_backend_test_code.test_min_int64') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_min_int64.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_min_int64.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import INT64
E   from onnxscript.onnx_opset import opset13
E   
E   @script()
E   def bck_test_min_int64(data_0: INT64[3], data_1: INT64[3]) -> (INT64[3]):
E       result = opset13.Min(data_0, data_1)
E       return result


Copilot AI (Contributor) left a comment

Copilot reviewed 12 out of 12 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (3)

onnxscript/optimizer/_legacy/_optimizer.py:1

  • Ensure that removal of these legacy optimizer modules is fully covered by tests for the new IR-based optimizer to prevent any functionality regressions.
# Removed legacy optimizer implementation

onnxscript/optimizer/_function_folding_test.py:47

  • [nitpick] Verify that all test assertions have been updated consistently to use the new IR graph interface (e.g. treating 'optimized.graph' as a list of nodes) so that test coverage remains complete.
self.assertEqual(len(optimized.graph), 2)

onnxscript/optimizer/__init__.py:62

  • [nitpick] Double-check that converting from an ir.Model back to an onnx.ModelProto preserves all necessary metadata, as missing metadata could lead to subtle runtime issues.
def optimize(model: _ModelProtoOrIr, num_iterations: int = 2, *, onnx_shape_inference: bool = True, ...
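
A small sanity check one could write for this metadata concern (a sketch only; the ir.serde helper names are assumptions, and the fields compared are just a sample of what is worth verifying):

    import onnx
    from onnxscript import ir

    def roundtrip_keeps_metadata(model_proto: onnx.ModelProto) -> bool:
        # proto -> ir.Model -> proto, then compare a few metadata fields.
        back = ir.serde.serialize_model(ir.serde.deserialize_model(model_proto))
        return (
            back.ir_version == model_proto.ir_version
            and back.producer_name == model_proto.producer_name
            and back.doc_string == model_proto.doc_string
            and list(back.metadata_props) == list(model_proto.metadata_props)
        )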

@justinchuby requested a review from Copilot on April 12, 2025 05:21
Copilot AI (Contributor) left a comment

Copilot reviewed 12 out of 12 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (2)

onnxscript/optimizer/__init__.py:54

  • The docstring indicates that 'stop_if_no_change' has no effect yet the parameter remains in the API. Consider either fully supporting this parameter or clarifying its intended use to avoid confusion.
stop_if_no_change: Not supported currently (has no effect).

onnxscript/optimizer/_function_folding_test.py:67

  • [nitpick] The use of a tuple as a key to access a function may be unclear to readers; consider defining a constant for this key or adding a clarifying comment to improve readability.
function = optimized.functions[("local", "fun1", "")]
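
One way to address the nitpick (illustrative only, matching the test snippet above): name the tuple and document its parts.

    # The functions table is keyed by (domain, function_name, overload).
    FUN1_KEY = ("local", "fun1", "")
    function = optimized.functions[FUN1_KEY]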

@justinchuby enabled auto-merge (squash) on April 21, 2025 20:59
@justinchuby requested a review from gramalingam on April 21, 2025 21:02
@titaiwangms (Contributor) left a comment

Approving to unblock @justinchuby. @gramalingam, feel free to follow up on this merged PR when you have time.

@justinchuby merged commit 7d0e616 into main on Apr 21, 2025 (21 of 27 checks passed)
@justinchuby deleted the justinchu/remove-legacy branch on April 21, 2025 21:18

Successfully merging this pull request may close these issues.

[optimizer] Implement rules from the legacy optimizer and clean up
4 participants